CAPACITY FOR PATTERNS AND SEQUENCES IN KANERVA'S SDM AS COMPARED TO OTHER ASSOCIATIVE MEMORY MODELS

James D. Keeler
Chemistry Department, Stanford University, Stanford, CA 94305
and RIACS, NASA-Ames 230-5, Moffett Field, CA 94035.
e-mail: [email protected]

ABSTRACT

The information capacity of Kanerva's Sparse, Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or the order of the model. The approximations are checked numerically. The same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and that the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.

INTRODUCTION

Many different models of memory and thought have been proposed by scientists over the years. In 1943, McCulloch and Pitts [1] proposed a simple model neuron with two states of activity (on and off) and a large number of inputs. Hebb (1949) considered a network of such neurons and postulated mechanisms for changing synaptic strengths to learn memories [2]. The learning rule considered here uses the outer product of patterns of +1s and -1s. Anderson (1977) discussed the effect of iterative feedback in such a system [3]. Hopfield (1982) showed that for symmetric connections [4], the dynamics of such a network is governed by an energy function analogous to the energy function of a spin glass [5]. Numerous investigations have been carried out on similar models [6-8].

Several limitations of these binary-interaction, outer-product models have been pointed out. For example, the number of patterns that can be stored in the system (its capacity) is limited to a fraction of the length of the pattern vectors. Also, these models are not very successful at storing correlated patterns or temporal sequences. Other models have been proposed to overcome these limitations. For example, one can allow higher-order interactions among the neurons [9,10]. In the following, I focus on a model developed by Kanerva (1984) called the Sparse, Distributed Memory (SDM) model [11]. The SDM can be viewed as a three-layer network that uses outer-product learning between the second and third layers. As discussed below, the SDM is more versatile than the above-mentioned networks: the number of stored patterns can be increased independent of the length of the pattern, the SDM can be used to store spatiotemporal patterns with context retrieval, and it can store correlated patterns.

The capacity limitations of outer-product models can be alleviated by using higher-order interaction models or the SDM, but a price must be paid for this added capacity in terms of an increase in the number of connections. How much information is gained per connection? It is shown in the following that the total information stored in each system is proportional to the number of connections in the network, and that the proportionality constant is independent of the particular model or the order of the model. This result also holds if the connections are limited to one bit of precision (clipped weights). The analysis presented here requires certain simplifying assumptions.
The approximate results are compared numerically to an exact calculation developed by Chou [12].

SIMPLE OUTER-PRODUCT NEURAL NETWORK MODEL

As an example of a simple first-order neural network model, I consider in detail the model developed by Hopfield [4]. This model will be used to introduce the mathematics and the concepts that will be generalized for the analysis of the SDM. The "neurons" are simple two-state threshold devices: the state of the i-th neuron, u_i, is either +1 (on) or -1 (off). Consider a set of n such neurons with net input (local field) h_i to the i-th neuron given by

  h_i = Σ_j T_ij u_j,   (1)

where T_ij represents the interaction strength between the i-th neuron and the j-th. The state of each neuron is updated asynchronously (at random) according to the rule u_i <- g(h_i), where the function g is a simple threshold function,

  g(x) = sign(x).   (2)

Suppose we are given M randomly chosen patterns (strings of length n of ±1s) which we wish to store in this system. Denote these M memory patterns as pattern vectors p^α = (p_1^α, p_2^α, ..., p_n^α), α = 1, 2, 3, ..., M. For example, p^1 might look like (+1, -1, +1, -1, -1, ..., +1). One method of storing these patterns is the outer-product (Hebbian) learning rule: start with T = 0 and accumulate the outer products of the pattern vectors. The resulting connection matrix is given by

  T_ij = Σ_{α=1}^{M} p_i^α p_j^α,   T_ii = 0.   (3)

The system described above is a dynamical system with attracting fixed points. To obtain an approximate upper bound on the total information stored in this network, we sidestep the issue of the basins of attraction, and we check whether each of the patterns stored by Eq. (3) is actually a fixed point of (2). Suppose we are given one of the patterns, p^β say, as the initial configuration of the neurons. I will show that p^β is expected to be a fixed point of Eq. (2). After inserting (3) for T into (1), the net input to the i-th neuron becomes

  h_i = Σ_{α=1}^{M} p_i^α [ Σ_j p_j^α p_j^β ].   (4)

The important term in the sum on α is the one for which α = β. This term represents the "signal" between the input p^β and the desired output. The rest of the sum represents "noise" resulting from crosstalk with all of the other stored patterns. The expression for the net input becomes h_i = signal_i + noise_i, where

  signal_i = p_i^β [ Σ_j p_j^β p_j^β ],   (5)

  noise_i = Σ_{α≠β} p_i^α [ Σ_j p_j^α p_j^β ].   (6)

Carrying out the sum on j in (5) yields signal_i = (n-1) p_i^β. Since n is positive, the sign of the signal term and p_i^β will be the same. Thus, if the noise term were exactly zero, the signal would give the same sign as p_i^β with a magnitude of about n, and p^β would be a fixed point of (2). Moreover, patterns close to p^β would give nearly the same signal, so p^β should be an attracting fixed point. For randomly chosen patterns, <noise> = 0, where < > indicates statistical expectation, and its variance will be σ² = (n-1)(M-1). The probability of an error on recall of p_i^β is given by the probability that the noise is greater than the signal. For n large, the noise distribution is approximately gaussian, and the probability of an error in the i-th bit is

  P_e = 1 - Φ(signal/σ),   (7)

where Φ is the cumulative normal distribution defined below in Eq. (9).
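To make the storage and recall mechanics concrete, here is a minimal numerical sketch of Eqs. (1)-(3). This is my own illustration, not code from the paper; a synchronous update is used for brevity, whereas the text assumes asynchronous updates, and ties in the sign function are broken toward +1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 100, 5                               # neurons, stored patterns

# M random patterns of +/-1s (the p^a of the text)
P = rng.choice([-1, 1], size=(M, n))

# Outer-product (Hebbian) storage, Eq. (3): T = sum_a p^a (p^a)^T, T_ii = 0
T = P.T @ P
np.fill_diagonal(T, 0)

def sgn(x):
    # threshold function of Eq. (2); ties broken toward +1
    return np.where(x >= 0, 1, -1)

def recall(u, steps=5):
    # synchronous version of the update u_i <- g(h_i), Eqs. (1)-(2)
    for _ in range(steps):
        u = sgn(T @ u)
    return u

# each stored pattern should be (with high probability) a fixed point
for a in range(M):
    print(a, np.mean(recall(P[a]) == P[a]))
```

With M/n well below the capacity limit, each stored pattern is recovered exactly; as M grows, the crosstalk noise of Eq. (6) begins to flip bits.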
INFORMATION CAPACITY

The number of patterns that can be stored in the network is known as its capacity [13,14]. However, for a fair comparison between all of the models discussed here, it is more relevant to compare the total number of bits (total information) stored in each model rather than the number of patterns. This allows comparison of information storage in models with different pattern-vector lengths. If we view the memory model as a black box which receives input bit strings and outputs them with some small probability of error in each bit, then the definition of bit-capacity used here is exactly the definition of channel capacity used by Shannon [15]. Define the bit-capacity as the number of bits that can be stored in a network with fixed probability of getting an error in a recalled bit, i.e. P_e = constant in Eq. (7). Explicitly, the bit-capacity is given by [16]

  B = nMη,   (8)

where η = 1 + P_e log₂P_e + (1-P_e) log₂(1-P_e). Note that η = 1 for P_e = 0. Setting P_e to a constant is tantamount to keeping the signal-to-noise ratio (fidelity) constant, where the fidelity, R, is given by R = |signal|/σ. Explicitly, the relation between (constant) P_e and R is just R = Φ⁻¹(1 - P_e), where

  Φ(R) = (1/√(2π)) ∫_{-∞}^{R} e^{-t²/2} dt.   (9)

Hence, the bit-capacity of these networks can be investigated by examining the fidelity of the models as a function of n, M, and R. From (8) and (9), the fidelity of the Hopfield model is R = n/(n(M-1))^{1/2} (n ≫ 1). Solving for M in terms of (fixed) R and η, the bit-capacity becomes B = η[(n²/R²) + n]. The results above can be generalized to models with d-th order interactions [17,18]. The resulting expression for the bit-capacity of d-th order interaction models is just

  B = η[ n^{d+1}/R² + n ].   (10)

Hence, we see that the number of bits stored in the system increases with the order d. However, to store these bits, one must pay a price by including more connections in the connection tensor. To demonstrate the relationship between the number of connections and the information stored, define the information capacity, γ, to be the total information stored in the network divided by the number of bits in the connection tensor (note that this is different from the definition used by Abu-Mostafa et al. [19]). Thus γ is just the bit-capacity divided by the number of bits in the tensor T, and represents the efficiency with which information is stored in the network. Since T has n^{d+1} elements, the information capacity is found to be

  γ = η/(R²b),   (11)

where b is the number of bits of precision used per tensor element (b ≈ log₂M for no clipping of the weights). For large n, the information stored per neuronal connection is γ = η/(R²b), independent of the order of the model (compare this result to that of Peretto et al. [21]). To illustrate this point, suppose one decides that the maximum allowed probability of getting an error in a recalled bit is P_e = 1/1000; this fixes the minimum value of R at 3.1. Thus, to store 10,000 bits with a recall-error probability of 0.001, equation (11) states that it would take ≈96,000·b bits in the connection tensor, independent of the order of the model; equivalently, ≈0.1n patterns can be stored with probability 1/1000 of an error in a recalled bit.
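The numbers quoted above follow directly from Eqs. (8)-(11). A short sketch of the arithmetic (my own illustration; assumes Python 3.8+ for NormalDist):

```python
import math
from statistics import NormalDist

pe = 1e-3                                   # allowed bit-error probability P_e
R = NormalDist().inv_cdf(1 - pe)            # Eq. (9): R = Phi^{-1}(1 - P_e), ~3.1
eta = 1 + pe*math.log2(pe) + (1 - pe)*math.log2(1 - pe)   # eta of Eq. (8)

n, d, b = 100, 1, 8                         # pattern length, order, bits per weight
B = eta * (n**(d + 1) / R**2 + n)           # Eq. (10): bit-capacity
gamma = eta / (R**2 * b)                    # Eq. (11): information capacity

print(f"R = {R:.2f}, eta = {eta:.4f}, B = {B:.0f} bits, gamma = {gamma:.4f}")
```

For B = 10,000 bits the required tensor size B/γ works out to roughly 96,000·b bits, as stated in the text.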
KANERVA'S SDM

Now we focus our attention on Kanerva's Sparse, Distributed Memory model (SDM) [11]. The SDM can be viewed as a 3-layer network with the middle layer playing the role of hidden units. To get an autoassociative network, the output layer can be fed back into the input layer, effectively making this a two-layer network. The first layer of the SDM is a layer of n ±1 input units (the input address, a), the middle layer is a layer of m hidden units, s, and the third layer consists of the n ±1 output units (the data, d). The connections between the input units and the hidden units are fixed random weights of ±1 and are given by the m×n matrix A. The connections between the hidden units and the output units are given by the n×m connection matrix C, and these matrix elements are modified by an outer-product learning rule (C is analogous to the matrix T of the Hopfield model).

Given an input pattern a, the hidden-unit activations are determined by

  s = Θ_r(A a),   (12)

where Θ_r is the Hamming-distance threshold function: the k-th element is 1 if the input a is at most r Hamming units away from the k-th row of A, and 0 if it is further than r units away, i.e.

  Θ_r(x)_k = 1 if ½(n - x_k) ≤ r,  0 if ½(n - x_k) > r.   (13)

The hidden-unit vector, or select vector, s, is mostly 0s with an average of δm 1s, where δ is some small number dependent on r, δ < 1. Hence, s represents a large, sparsely coded vector of 0s and 1s representing the input address. The net input, h, to the final layer can be simply expressed as the product of C with s:

  h = C s.   (14)

Finally, the output data is given by d = g(h), where g_i(h_i) = sign(h_i). To store the M patterns p¹, p², ..., p^M, form the outer product of these pattern vectors and their corresponding select vectors,

  C = Σ_{α=1}^{M} p^α (s^α)ᵀ,   (15)

where ᵀ denotes the transpose of the vector, and where each select vector is formed from the corresponding address, s^α = Θ_r(A p^α). The storage algorithm (15) is an outer-product learning rule similar to (3).

Suppose that the M patterns (p¹, p², ..., p^M) have been stored according to (15). Following the analysis presented for the Hopfield model, I show that if the system is presented with p^β as input, the output will be p^β (i.e. p^β is a fixed point). Setting a = p^β in (12) and separating terms as before, the net input (14) becomes

  h = p^β (s^β · s^β) + Σ_{α≠β} p^α (s^α · s^β),   (16)

where the first term represents the signal and the second the noise. Recall that the select vectors have an average of δm 1s and the remainder 0s, so that the expected value of the signal is δm p^β. Assuming that the addresses and data are randomly chosen, the expected value of the noise is zero. To evaluate the fidelity, I make certain approximations. First, I assume that the select vectors are independent of each other. Second, I assume that the variance of the signal alone is zero, or small compared to the variance of the noise term alone. The first assumption will be valid for mδ² < 1, and the second assumption will be valid for Mδ > 1. With these assumptions, we can easily calculate the variance of the noise term, because the select vectors are i.i.d. vectors of length m with mostly 0s and ≈δm 1s. With these assumptions, the fidelity is given by

  R² = m²δ² / [ (M-1)δ²m(1 + δ²(m-1)) ].   (17)

In the limit of large m, with δm ≈ constant, the number of stored bits scales as

  B ≈ η[ mn/(R²(1 + δ²m)) + n ].   (18)

If we divide this by the number of elements in C, we find the information capacity γ = η/(R²b), just as before, so the information capacity is the same for the two models. (If we divide the bit-capacity by the number of elements in C and A together, we get γ = η/(R²(b+1)), which is about the same for large M.)
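A minimal sketch of the SDM write/read cycle of Eqs. (12)-(15) follows (my own illustration; the radius r is chosen so that δm ≈ 30 of the m = 1000 locations are selected on average):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, M, r = 100, 1000, 20, 40      # word size, hidden locations, patterns, radius

A = rng.choice([-1, 1], size=(m, n))        # fixed random address matrix

def select(a):
    # Theta_r of Eqs. (12)-(13): 1 where the Hamming distance
    # between a and a row of A is at most r
    return ((n - A @ a) / 2 <= r).astype(int)

P = rng.choice([-1, 1], size=(M, n))        # random patterns to autoassociate

# outer-product storage into C, Eq. (15)
C = np.zeros((n, m))
for p in P:
    C += np.outer(p, select(p))

def read(a):
    # Eq. (14) followed by the output threshold d = g(h)
    return np.where(C @ select(a) >= 0, 1, -1)

print(np.mean(read(P[0]) == P[0]))          # fraction of bits of p^1 recovered
```

With these values, mδ² is of order 1 and Mδ > 1, so the read fidelity predicted by Eq. (17) is high and stored patterns come back essentially intact.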
A few comments before we continue. First, it should be pointed out that the assumption made by Kanerva [11] and Keeler [17,18], that the variance of the signal term is much less than that of the noise, is not valid over the entire range. If we took this into account, the magnitude of the denominator would be increased by the variance of the signal term. Further, if we read at a distance l away from the write address, then it is easy to see that the signal changes to mδ(l), where δ(l) is the overlap of two spheres of radius r a Hamming distance l apart in the binomial space of dimension n (δ = δ(0)). The fidelity for reading at a distance l away from the write address is

  R² = m²δ²(l) / [ mδ(1 - δ(l)) + (M-1)mδ² + (M-1)δ⁴m²(1 - 1/m) ].   (19)

Compare this to the formula derived by Chou [12] for the exact signal-to-noise ratio:

  R² = m²δ²(l) / [ mδ(1 - δ(l)) + (M-1)mμ_rr + (M-1)σ²_rr m²(1 - 1/m) ],   (20)

where μ_rr is the average overlap of the spheres of radius r, binomially distributed with parameters (n, ½), and σ²_rr is the variance of this overlap. The difference between these two formulas lies in the denominator terms: δ² versus μ_rr and δ⁴ versus σ²_rr. The difference comes from the fact that Chou correctly calculates the overlap of the spheres without using the independence assumption. How do these formulas differ? First of all, it is found numerically that δ² is identical to μ_rr. Hence, the only difference comes from δ⁴ versus σ²_rr. For mδ² < 1, the δ⁴ term is negligible compared to the other terms in the denominator. In addition, δ⁴ and σ²_rr are approximately equal for large n and r = n/2. Hence, in the limit n → ∞ the two formulas agree over most of the range if M = 0.1m, m < 2ⁿ. However, for finite n, the two formulas can disagree when mδ² = 1 (see Figure 1).

[Figure 1: A comparison of the fidelity (signal-to-noise ratio) calculations of the SDM as a function of Hamming radius, for typical n, M, and m values. Equation (17) was derived assuming no variance of the signal term (+ line). Equation (19) uses the approximation that all of the select vectors are independent (o line). Equation (20) (* line) is the exact derivation done by Chou [12]. The values used here were n = 150, m = 2000, M = 100.]

Equation (20) suggests that there is a best read-write Hamming radius for the SDM. By setting l = 0 in (19) and setting the derivative of R with respect to δ to zero, we get an approximate expression for the best Hamming radius: δ_best = (2Mn)^{-1/3}. This trend is shown qualitatively in Figure 2.

[Figure 2: Numerical investigation of the capacity of the SDM. The vertical axis is the percent of recovered patterns with no errors. The x-axis (left to right) is the Hamming distance used for reading and writing; the y-axis (back to front) is the number of patterns written into the memory. For this investigation, n = 128, m = 1024, and M ranges from 1 to 501. Note the similarity of a cross-section of this graph at constant M with Figure 1. This calculation was performed by David Cohn at RIACS, NASA-Ames.]

Figure 1 indicates that formula (17), which neglects the variance of the signal term, is incorrect over much of the range. However, a variant of the SDM constrains the number of selected locations to be constant; circuitry for doing this is easily built [22]. The variance of the signal term would be zero in that case, and the approximate expression for the fidelity is then given by Eq. (17). There are certain problems where it would be better to keep δ constant, as in the case of correlated patterns (see below).
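For reference, a short sketch of locating the read-write radius r implied by δ_best = (2Mn)^{-1/3}, using the binomial expression for δ (my own illustration, with the Figure 1 parameters):

```python
import math

def delta(r, n):
    # fraction of the 2^n address space within Hamming distance r of a point
    return sum(math.comb(n, k) for k in range(r + 1)) / 2.0**n

n, m, M = 150, 2000, 100
target = (2 * M * n) ** (-1 / 3)            # delta_best = (2Mn)^(-1/3)
r_best = min(range(n + 1), key=lambda r: abs(delta(r, n) - target))
print(r_best, round(delta(r_best, n), 4), round(target, 4))
```

For these values the best radius lands in the low 60s, i.e. well inside the mδ² < 1 regime where formulas (19) and (20) agree.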
The above analysis assumed that the elements (weights) in the outer-product matrix are not clipped, i.e. that there are enough bits to store the largest value of any matrix element. It is interesting to consider what happens if we allow these values to be represented by only a few bits. If we consider the case b = 1, i.e. the weights are clipped at one bit, it is easy to show [17] that γ = 2η/(πR²) for the d-th order models and for the SDM, which yields γ ≈ 0.07 for reasonable R (this is substantially less than Willshaw's 0.69 [8]).

SEQUENCES

In an autoassociative memory, the system relaxes to one of the stored patterns and stays fixed in time until a new input is presented. However, there are many problems where the recalled patterns must change sequentially in time. For example, a song can be remembered as a string of notes played in the correct sequence; cyclic patterns of muscle contractions are essential for walking, riding a bicycle, or dribbling a basketball. As a first step we consider the very simplistic sequence production put forth by Hopfield (1982) and Kanerva (1984). Suppose that we wished to store a sequence of patterns (p¹, p², ..., p^M) in the SDM. This sequence of patterns could be stored by having each pattern point to the next pattern in the sequence. Thus, for the SDM, the patterns would be stored as input-output pairs (a^α, d^α), where a^α = p^α and d^α = p^{α+1} for α = 1, 2, 3, ..., M-1. Convergence to this sequence works as follows: if the SDM is presented with an address that is close to p¹, the read data will be close to p². Iterating the system with p² as the new input address, the read data will be even closer to p³. As this iterative process continues, the read data converge to the stored sequence, with the next pattern in the sequence being presented at each time step. The convergence statistics are essentially the same for sequential patterns as those shown above for autoassociative patterns. Presented with p^α as an input address, the signal for the stored sequence is found as before:

  <signal> = δm p^{α+1}.   (21)

Thus, given p^α, the read data is expected to be p^{α+1}. Assuming that the patterns in the sequence are randomly chosen, the mean value of the noise is zero, with variance

  <σ²> = (M-1)δ²m(1 + δ²(m-1)).   (22)

Hence, the length of a sequence that can be stored in the SDM increases linearly with m for large m. Attempting to store sequences like this in the Hopfield model is not very successful, due to the asynchronous updating used in the Hopfield model. A synchronously updated outer-product model (for example [6]) would work just as described for the SDM, but it would still be limited to storing a fraction of the word size as the maximum sequence length. Another method for storing sequences in Hopfield-like networks has been proposed independently by Kleinfeld [23] and by Sompolinsky and Kanter [24]. These models relieve the problem created by asynchronous updating by using a time-delayed sequential term. This time-delay storage algorithm has different dynamics than the synchronous SDM model. In the time-delay algorithm, the system allows time for the units to relax to the first pattern before proceeding to the next pattern, whereas in the synchronous algorithms, the sequence is recalled imprecisely from imprecise input for the first few iterations and then correctly after that. In other words, convergence to the sequence takes place "on the fly" in the synchronous models - the system does not wait to zero in on the first pattern before proceeding to recover the following patterns. This allows the synchronous algorithms to proceed k times as fast as the asynchronous time-delay algorithms, with half as many (variable) matrix elements. This difference should be detectable in biological systems.
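A sketch of heteroassociative sequence storage and "on the fly" convergence, reusing A, select, P, n, m, M, and rng from the SDM example above (my own illustration):

```python
# Each pattern's select vector points to the *next* pattern in the sequence.
C = np.zeros((n, m))
for a in range(M - 1):
    C += np.outer(P[a + 1], select(P[a]))   # store the pair (p^a, p^{a+1})

x = P[0] * np.where(rng.random(n) < 0.1, -1, 1)   # corrupt 10% of the bits of p^1
for t in range(1, 6):
    x = np.where(C @ select(x) >= 0, 1, -1)       # read, feed back as new address
    print(t, np.mean(x == P[t]))                  # converges onto the stored sequence
```

Even starting from a corrupted address, the read data lock onto the stored sequence within a few iterations, as described in the text.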
TIME DELAYS AND HYSTERESIS: FOLDS

The above scenario for storing sequences is inadequate to explain speech recognition or pattern generation. For example, the above algorithm cannot store sequences of the form ABAC, or overlapping sequences. In Kanerva's original work, he included the concept of time delays as a general way of storing sequences with hysteresis. The problem addressed by this is the following: suppose we wish to store two sequences of patterns that overlap. For example, the two pattern sequences (a,b,c,d,e,f,...) and (x,y,z,d,w,v,...) overlap at the pattern d. If the system only has knowledge of the present state, then when given the input d, it cannot decide whether to output w or e. To store two such sequences, the system must have some knowledge of the immediate past. Kanerva incorporates this idea into the SDM by using "folds." A system with F+1 folds has a time history of F past states. These F states may be over the past F time steps, or they may go even further back in time, skipping some time steps. The algorithm for reading from the SDM with folds becomes

  d(t+1) = g( C₀·s(t) + C₁·s(t-τ₁) + ... + C_F·s(t-τ_F) ),   (23)

where s(t-τ_φ) = Θ_r(A a(t-τ_φ)). To store the Q pattern sequences (p₁¹, p₂¹, ..., p_{M₁}¹), (p₁², p₂², ..., p_{M₂}²), ..., (p₁^Q, p₂^Q, ..., p_{M_Q}^Q), construct the matrix of the φ-th fold as follows:

  C_φ = w_φ Σ_{q=1}^{Q} Σ_{α=1}^{M_q} p_{α+1}^q (s_{α-φ}^q)ᵀ,   (24)

where any vector with a subscript less than 1 is taken to be zero, s_{α-φ}^q = Θ_r(A p_{α-φ}^q), and w_φ is a weighting factor that would normally decrease with increasing φ.

Why do these folds work? Suppose that the system is presented with the pattern sequence (p₁¹, p₂¹, ..., p_t¹), with each pattern presented sequentially as input until the τ_F-th time step. For simplicity, assume that w_φ = 1 for all φ. Each term in Eq. (24) will contribute a signal similar to the signal for the single-fold system. Thus, on the t-th time step, the signal term coming from Eq. (24) is <signal(t+1)> = Fδm p¹_{t+1}. The signal will have this value until the end of the pattern sequence is reached. The mean of the noise terms is zero, with variance <σ²> = F(M-1)δ²m(1 + δ²(m-1)). Hence, the signal-to-noise ratio is √F times as strong as it is for the SDM without folds. Suppose further that the second stored pattern sequence happens to match the first stored sequence at t = τ. The signal term would then be

  signal(t+1) = Fδm p¹_{t+1} + δm p²_{t+1}.   (25)

With no history of the past (F = 1), the signal is split between p¹_{t+1} and p²_{t+1}, and the output is ambiguous. However, for F > 1, the signal for the first pattern sequence dominates and allows retrieval of the remainder of the correct sequence. This formulation allows context to aid in the retrieval of stored sequences, and can differentiate between overlapping sequences by using time delays.
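A sketch of reading with folds, Eqs. (23)-(24), with F = 2 unit delays (τ₁ = 1, τ₂ = 2) and w_φ = 1, again reusing select and P from the SDM example (my own illustration):

```python
F = 2
C_folds = [np.zeros((n, m)) for _ in range(F + 1)]
for phi in range(F + 1):
    for a in range(phi, M - 1):
        # fold phi associates the select vector of p^{a-phi} with p^{a+1};
        # vectors indexed before the start of the sequence are taken as zero
        C_folds[phi] += np.outer(P[a + 1], select(P[a - phi]))

def read_with_folds(history):
    # history[0] = s(t), history[1] = s(t - tau_1), history[2] = s(t - tau_2)
    h = sum(Cf @ s for Cf, s in zip(C_folds, history))
    return np.where(h >= 0, 1, -1)

hist = [select(P[2]), select(P[1]), select(P[0])]
print(np.mean(read_with_folds(hist) == P[3]))    # context-aided recall of p^4
```

All F+1 folds vote for the same next pattern when the history matches a stored sequence, which is the F-fold signal gain that resolves overlapping sequences.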
The above formulation is still too simplistic to handle real recognition problems such as speech recognition. First, the above algorithm can only recall sequences at a fixed time rate, whereas speech occurs at widely varying rates. Second, the above algorithm does not allow for deletions in the incoming data: for example, "seqnce" can be recognized as "sequence" even though some letters are missing. Third, as pointed out by Lashley [25], speech processing relies on hierarchical structures.

Although Kanerva's original algorithm is too simplistic, a straightforward modification allows retrieval at different rates with deletions. To achieve this, we can add the time-delay terms with weights which are smeared out in time. Kanerva's (1984) formulation can thus be viewed as a discrete-time formulation of that put forth by Hopfield and Tank (1987) [26]. Explicitly, we could write

  h(t+1) = Σ_{φ=0}^{F} W_φ C_φ s(t-τ_φ),   (26)

where the coefficients W_φ are a discrete approximation to a smooth function which spreads the delayed signal out over time. As a further step, we could modify these weights dynamically to optimize the signal coming out. The time-delay patterns could also be placed in a hierarchical structure, as in the matched-filter avalanche structure put forth by Grossberg et al. (1986) [27].

CORRELATED PATTERNS

In the above associative memories, all of the patterns were taken to be randomly chosen, uniformly distributed binary vectors of length n. However, there are many applications where the set of input patterns is not uniformly distributed; the input patterns are correlated. In mathematical terms, the set K of input patterns would not be uniformly distributed over the entire space of 2ⁿ possible patterns. Let the probability distribution function for the Hamming distance between two randomly chosen vectors p^α and p^β from the distribution K be given by ρ(d(p^α - p^β)), where d(x-y) is the Hamming distance between x and y.

The SDM can be generalized from Kanerva's original formulation so that correlated input patterns can be associated with output patterns. For the moment, assume that the distribution set K and the probability density function ρ(x) are known a priori. Instead of constructing the rows of the matrix A from the entire space of 2ⁿ patterns, construct the rows of A from the distribution K. Adjust the Hamming distance r so that δm ≈ constant locations are selected. In other words, adjust r so that the value of δ is the same as given above, where δ is determined by

  δ = (1/2ⁿ) ∫₀^r ρ(x) dx.   (27)

This implies that r would have to be adjusted dynamically. This could be done, for example, by a feedback loop. Circuitry for doing this is easily built, and a similar structure appears in the Golgi cells of the cerebellum [28]. Using the same distribution for the rows of A as the distribution of the patterns in K, and using (27) to specify the choice of r, all of the above analysis is applicable (assuming randomly chosen output patterns). If the outputs do not have equal numbers of 1s and -1s, the mean of the noise is not 0. However, if the distribution of outputs is also known, the system can still be made to work by storing 1/ρ₊ and 1/ρ₋ for 1s and -1s respectively, where ρ± is the probability of getting a +1 or a -1. Using this storage algorithm, all of the above formulas hold (as long as the distribution is smooth enough and not extremely dense). The SDM will be able to recover data stored with correlated inputs with a fidelity given by Equation (17).

What if the distribution of K is not known a priori? In that case, we would need the matrix A to learn the distribution ρ(x). There are many ways to build A to mimic ρ. One such way is to start with a random A matrix and modify the entries of randomly chosen rows of A at each step according to the statistics of the most recent input patterns. Another method is to use competitive learning [31] to achieve the proper distribution of A. The competitive learning algorithm is a method for adjusting the weights A_ij between the first and second layers to match the probability density function ρ(x). The i-th row of the address matrix A can be viewed as a vector A_i. The competitive learning algorithm holds a competition between these vectors, and the few vectors that are closest (within the Hamming sphere r) to the input pattern x are the winners. Each of these winners is then modified slightly in the direction of x. For large enough m, this algorithm almost always converges to a distribution of the A_i that is the same as ρ(x). The updating equation for the selected addresses is just

  A_i^new = A_i^old - λ(A_i^old - x).   (28)

Note that for λ = 1, this reduces to the so-called unary representation of Baum et al. [32], which gives the maximum efficiency in terms of capacity.
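A sketch of the competitive update of Eq. (28), treating the rows of A as real-valued vectors (my own liberty for illustration; the text's rows are ±1, and λ = 1 recovers the unary representation):

```python
lam = 0.1                                   # learning rate, lambda in Eq. (28)

def competitive_update(A, x, r):
    dist = (n - A @ x) / 2                  # (soft) Hamming distance to each row
    winners = np.where(dist <= r)[0]        # rows inside the sphere of radius r
    A[winners] += lam * (x - A[winners])    # A_new = A_old - lam * (A_old - x)
    return A

A_real = A.astype(float)
for p in P:                                 # adapt A toward the input distribution
    A_real = competitive_update(A_real, p, r)
```

Repeated over many samples from K, the winning rows drift toward the dense regions of the pattern distribution, which is exactly the property needed for Eq. (27) to hold with a fixed δ.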
DISCUSSION

The above analysis said nothing about the basins of attraction of these memory states. A measure of the performance of a content-addressable memory should also say something about the average radius of convergence of the basins of attraction. The basins are in general quite complicated and have been investigated numerically for the unclipped models and values of n and m ranging in the hundreds [22]. The basins of attraction for the SDM and the d = 1 model are very similar in their characteristics and their average radius of convergence. However, the above results give an upper bound on the capacity by looking at the fixed points of the system (if there is no fixed point, there is no basin).

In summary, the above arguments show that the total information stored in outer-product neural networks is a constant times the number of connections between the neurons. This constant is independent of the order of the model and is the same (η/R²b) for the SDM as well as higher-order Hopfield-type networks. The advantage of going to an architecture like the SDM is that the number of patterns that can be stored in the network is independent of the size of the pattern, whereas the number of stored patterns is limited to a fraction of the word size for the Willshaw or Hopfield architectures. The point of the above analysis is that the efficiency of the SDM in terms of information stored per bit is the same as for Hopfield-type models. It was also demonstrated how sequences of patterns can be stored in the SDM, and how time delays can be used to recover contextual information. A minor modification of the SDM could be used to recover time sequences at slightly different rates of presentation. Moreover, another minor modification allows the storage of correlated patterns in the SDM. With these modifications, the SDM presents a versatile and efficient tool for investigating properties of associative memory.

Acknowledgements: Discussions with John Hopfield and Pentti Kanerva are gratefully acknowledged. This work was supported by DARPA contract # 86-A227500-000.

REFERENCES

[1] McCulloch, W. S. & Pitts, W. (1943) Bull. Math. Biophys. 5, 115-133.
[2] Hebb, D. O. (1949) The Organization of Behavior. John Wiley, New York.
[3] Anderson, J. A., Silverstein, J. W., Ritz, S. A. & Jones, R. S. (1977) Psych. Rev. 84, 412-451.
[4] Hopfield, J. J. (1982) Proc. Natl. Acad. Sci. USA 79, 2554-2558.
[5] Kirkpatrick, S. & Sherrington, D. (1978) Phys. Rev. B 17, 4384-4405.
[6] Little, W. A. & Shaw, G. L. (1978) Math. Biosci. 39, 281-289.
[7] Nakano, K. (1972) Association - a model of associative memory. IEEE Trans. Sys. Man Cyber. 2.
[8] Willshaw, D. J., Buneman, O. P. & Longuet-Higgins, H. C. (1969) Nature 222, 960-962.
[9] Lee, Y. C., Doolen, G., Chen, H. H., Sun, G. Z., Maxwell, T., Lee, H. Y. & Giles, L. (1985) Physica 22D, 276-306.
[10] Baldi, P. & Venkatesh, S. S. (1987) Phys. Rev. Lett. 58, 913-916.
[11] Kanerva, P. (1984) Self-propagating Search: A Unified Theory of Memory. Stanford University Ph.D. Thesis; also Bradford Books (MIT Press), in press (1987 est.).
[12] Chou, P. A. The capacity of Kanerva's Associative Memory, these proceedings.
[13] McEliece, R. J., Posner, E. C., Rodemich, E. R. & Venkatesh, S. S. (1986) IEEE Trans. on Information Theory.
[14] Amit, D. J., Gutfreund, H. & Sompolinsky, H. (1985) Phys. Rev. Lett. 55, 1530-1533.
[15] Shannon, C. E. (1948) Bell Syst. Tech. J. 27, 379, 623 (reprinted in Shannon and Weaver, 1949).
[16] Kleinfeld, D. & Pendergraft, D. B. (1987) Biophys. J. 51, 47-53.
[17] Keeler, J. D. (1986) Comparison of Sparsely Distributed Memory and Hopfield-type Neural Network Models. RIACS Technical Report 86-31; also submitted to J. Cog. Sci.
[18] Keeler, J. D. (1987) Physics Letters 124A, 53-58.
[19] Abu-Mostafa, Y. & St. Jacques, J. (1985) IEEE Trans. on Info. Theory 31, 461.
[20] Keeler, J. D. Basins of Attraction of Neural Network Models. AIP Conf. Proc. 151, ed. John Denker, Neural Networks for Computing, Snowbird, Utah (1986).
[21] Peretto, P. & Niez, J. J. (1986) Biol. Cybern. 54, 53-63.
[22] Keeler, J. D. (1987) Collective phenomena of coupled lattice maps: Reaction-diffusion systems and neural networks. Ph.D. Dissertation, Department of Physics, University of California, San Diego.
[23] Kleinfeld, D. (1986) Proc. Nat. Acad. Sci. 83, 9469-9473.
[24] Sompolinsky, H. & Kanter, I. (1986) Physical Review Letters.
[25] Lashley, K. S. (1951) In Cerebral Mechanisms in Behavior, ed. Jeffress, L. A. Wiley, New York, 112-136.
[26] Hopfield, J. J. & Tank, D. W. (1987) ICNN San Diego preprint.
[27] Grossberg, S. & Stone, G. (1986) Psychological Review 93, 46-74.
[28] Marr, D. (1969) Journal of Physiology 202, 437-470.
[29] Grossberg, S. (1976) Biological Cybernetics 23, 121-134.
[30] Kohonen, T. (1984) Self-organization and Associative Memory. Springer-Verlag, Berlin.
[31] Rumelhart, D. E. & Zipser, D. (1985) Cognitive Sci. 9, 75.
[32] Baum, E., Moody, J. & Wilczek, F. (1987) Preprint for Biological Cybernetics.
SPEECH RECOGNITION USING CONNECTIONIST APPROACHES

Khalid Choukri
SPRINT Coordinator
CAP GEMINI INNOVATION
118 rue de Tocqueville, 75017 Paris, France
e-mail: [email protected]

Abstract

This paper is a summary of the SPRINT project's aims and results. The project focuses on the use of neuro-computing techniques to tackle various problems that remain unsolved in speech recognition. First results concern the use of feedforward nets for phonetic-unit classification, isolated word recognition, and speaker adaptation.

1 INTRODUCTION

Speech is a complex phenomenon, but it is useful to divide it into levels of representation. Connectionist paradigms and their particularities are exploited to tackle the major problems related to intra- and inter-speaker variabilities, in order to improve recognizer performance. For that purpose the project has been split into individual tasks along the processing chain:

  Signal -> Parameters -> Phonetic -> Lexicon

The work described herein concerns:

- Parameters-to-Phonetic: classification of speech parameters using a set of "phonetic" symbols, and extraction of speech features from the signal.
- Parameters-to-Lexical: classification of a sequence of feature vectors by lexical access (isolated word recognition) in various environments.
- Parameters-to-Parameters: adaptation to new speakers and environments.

The following sections summarize the work carried out within this project. Details, including descriptions of the different nets, are reported in the project deliverables (Choukri, 1990), (Bimbot, 1990), (Varga, 1990).

2 PARAMETERS-TO-PHONETIC

The objectives of this task were to assess various neural network topologies, and to examine the use of prior knowledge in improving results, in the process of acoustic-phonetic decoding of natural speech. These results were compared to classical pattern-classification approaches such as k-nearest-neighbour (k-NN) classifiers, dynamic programming, and k-means.

2.1 DATABASES

The speech was uttered by one male speaker in French. Two databases were used: DB_1, made of isolated nonsense words (logatomes), which contains 6672 phonemes, and DB_2, consisting of 200 recorded sentences containing 5270 phonemes. DB_2 was split equally into training and test sets (2635 data each). 34 different labels were used: one per phoneme (not per allophone) and one for silence. For each phoneme occurrence, 16 frames of signal (8 on each side of the label) were processed to provide a 16-coefficient Mel-scaled filter-bank vector per frame.

2.2 CLASSICAL CLASSIFIERS

Experiments using k-NN and k-means classifiers were conducted to check the consistency of the data and to provide reference scores. A first protocol treated each pattern as a 256-dimension vector and applied k-nearest neighbours with the Euclidean distance between references and tests. A second protocol attempted to reduce the influence of time misalignments by carrying out dynamic time warping (DTW) between references and tests, taking the sum of distances along the best path as the distance measure between patterns. The same data was used in the framework of a k-means classifier, for various values of k (number of representatives per class). The best results are:

  k-means (k > 16): 61.3 %
  k-NN (k = 5): 72.2 %
  k-NN + DTW (k = 5): 77.5 %
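The first protocol can be sketched as follows (my own illustration, not project code; refs is the matrix of 256-dimensional reference patterns and labels their phoneme labels, both names hypothetical):

```python
import numpy as np

def knn_classify(x, refs, labels, k=5):
    """k-nearest-neighbour vote with Euclidean distance over the
    256-dimensional (16 frames x 16 filter-bank coefficients) patterns."""
    d = np.linalg.norm(refs - x, axis=1)          # distance to every reference
    nearest = labels[np.argsort(d)[:k]]           # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]                # majority vote
```

The DTW variant replaces the single Euclidean distance with the accumulated frame-to-frame distance along the best warping path.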
2.3 NEURAL CLASSIFIERS

2.3.1 LVQ Classifiers

Experiments were conducted using the Learning Vector Quantization (LVQ) technique (Bennani, 1990). The weights-initialization procedure proved to be an important factor in classification performance. We compared three initialization algorithms: k-means, LBG, and Multiedit. With k-means and LBG, tests were conducted with different numbers of reference vectors, while Multiedit discovers representative vectors in the training set automatically, so their number is not specified in advance. Initialization by LBG gave better self-consistency (evaluation on the training database, DB_1), whereas test performance on DB_2 (sentences) was similar for all procedures and very low. Further experiments were carried out on DB_2 both for training and testing. LBG initialization with 16 and 32 classes was tried (since these gave the best performances in the previous experiment). Even though the self-consistency for sentences is slightly lower than that for logatomes, the recognition scores are far better, as illustrated here (k-means score, LBG-initialization score, then score after LVQ training):

  16 refs per class: k-means 60.3 %, LBG 62.4 % -> LVQ 63.2 %
  32 refs per class: k-means 61.3 %, LBG 66.1 % -> LVQ 67.2 %

This experiment and others not presented here (Bimbot, 1990) confirm that the failure of the previous experiments is due more to a mismatch between the corpora than to an inadequacy of the classification technique itself.

2.3.2 The Time-Delay Neural Network (TDNN) Classifiers

A TDNN, as introduced by A. Waibel (Waibel, 1987), can be described by its set of topological parameters, i.e. M0xN0/P0,S0 - M1xN1/P1,S1 - M2xN2 - Kx1. In the following, a "TDNN-derived" network has a similar architecture, except that M2 is not constrained to be equal to K, and the connectivity between the last 2 layers is full. Various TDNN-derived architectures were tested on recognizing phonemes from sentences (DB_2) after learning on the logatomes (DB_1). The best results (self-consistency / recognition score) are given below:

  16x16/2,1 - 8x15/7,2 - 5x5 - 34x1: 63.9 % / 48.1 %
  16x16/2,1 - 16x15/7,4 - 11x3 - 34x1: 75.1 % / 54.8 %
  16x16/4,1 - 16x13/5,2 - 16x5 - 34x1: 81.0 % / 60.5 %
  16x16/2,1 - 16x15/7,4 - 16x3 - 34x1: 79.8 % / 60.8 %

The first net is clearly not powerful enough for the task, so the number of free parameters had to be increased. This immediately improved the results, as can be seen for the other nets. The third and fourth nets have equivalent performance; they differ in the local window widths and delays. Other tested architectures did not increase this performance. The main difference between training and test sets is certainly the different speaking rates, and therefore the existence of important time distortions. TDNN-derived architectures seem more able to handle these distortions than LVQ, as the generalization performance is significantly higher for similar learning self-consistency, but both fail to remove all temporal misalignment effects. In order to improve classification performance we changed the cost function minimized by the network: the error term corresponding to the desired output is multiplied by a constant H greater than 1, the error terms corresponding to the other outputs being left unchanged, to compensate for the deficiency of the simple mean-square-error procedure.
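The modified cost can be written in a few lines (a sketch under my own naming; y is the output vector and target its one-hot desired value):

```python
import numpy as np

def weighted_mse(y, target, H=2.0):
    """Modified cost of the text: the squared-error term of the desired
    (target = 1) output is multiplied by H > 1, the others are unchanged.
    Returns the loss and its gradient with respect to the outputs y."""
    w = np.where(target == 1.0, H, 1.0)
    err = y - target
    return 0.5 * np.sum(w * err**2), w * err
```

The effect is to penalize a weak response on the correct class more heavily than spurious responses on the 33 other classes.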
We obtained our best results, with the best TDNN-derived net we had experimented with, for H = 2 (net 16x16/4,1 - 16x13/5,2 - 16x5 - 34x1 in both cases):

  DB_1: self-consistency 87.0 %, recognition score 63.0 %
  DB_2: self-consistency 87.0 %, recognition score 78.0 %

The too-small number of independent weights (a too-low-dimensioned TDNN-derived architecture) makes the problem too constrained. A well-chosen TDNN-derived architecture can perform as well as the best k-nearest-neighbours strategy. Performance drops for data that mainly differ by a significant speaking-rate mismatch, which could indicate that TDNN-derived architectures do not manage to handle all kinds of time distortions. It is therefore encouraging to combine different networks and classical methods to deal with the temporal and sequential aspects of speech.

2.3.3 Combination of TDNN and LVQ

A set of experiments using a combined TDNN-derived network and LVQ architecture was conducted. For these experiments, we used the best nets found in the previous experiments. The main parameter of these experiments is the number of hidden cells in the last layer of the TDNN-derived network, which is the input layer of LVQ (Bennani, 1990). Evaluation on DB_1 with various numbers of references per class gave the following recognition scores (TDNN + k-means / TDNN + LBG / TDNN + LVQ with LBG initialization):

  4 refs per class: 76.2 % / 77.7 % / 78.4 %
  8 refs per class: 78.1 % / 79.9 % / 82.1 %
  16 refs per class: 79.8 % / 81.3 % / 81.4 %

The best results were obtained with 8 references per class and the LBG algorithm to initialize the LVQ module. The best performance on the test set (82.1 %) represents a significant increase (4 %) compared to the best TDNN-derived network alone. Other experiments were performed on TDNN + LVQ using a modified LVQ architecture, presented in (Bennani, 1990), which extends LVQ to automatically weight the variables according to their importance for classification. We obtain a recognition score of 83.6 % on DB_2 (training and tests on sentences).
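The combination can be pictured as an LVQ codebook sitting on the TDNN's last hidden layer. A sketch of that classification and training step follows (my own illustration using the LVQ1 rule, which may differ in detail from the variant used in the project):

```python
import numpy as np

def lvq_classify(z, prototypes, labels):
    """Nearest-prototype decision on the TDNN's last-hidden-layer output z."""
    return labels[np.argmin(np.linalg.norm(prototypes - z, axis=1))]

def lvq1_update(prototypes, labels, z, y, lr=0.05):
    """One LVQ1 step: move the winning prototype toward z if its label
    matches the true class y, away from z otherwise."""
    k = np.argmin(np.linalg.norm(prototypes - z, axis=1))
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] += sign * lr * (z - prototypes[k])
    return prototypes
```

The number of hidden cells feeding the codebook (the dimension of z) is the main tuning parameter discussed above.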
We also used low-dimensioned TDNNs for discriminating between phonetic features (Bimbot, 1990), assuming that phonetics provides a description of speech that appropriately constrains a neural network a priori, the TDNN structure warranting the desirable property of shift invariance. The feature-extraction approach can be considered as another way to use prior knowledge for solving a complex problem with neural networks. The results obtained in these experiments are an interesting starting point for designing a large modular network where each module is in charge of a simple task, directly related to a well-defined linguistic phenomenon (Bimbot, 1990).

2.4 CONCLUSIONS

Experiments with LVQ alone, a TDNN-derived network alone, and combined TDNN-LVQ architectures proved the combined architecture to be the most efficient with respect to our databases, as summarized below (training and tests on DB_2):

  k-means 61.3 %, LVQ 67.2 %, k-NN 72.2 %, k-NN + DTW 77.5 %, TDNN 78.0 %, TDNN + LVQ 83.6 %

3 PARAMETERS-TO-LEXICAL

The main objective of this task is to use neural nets for the classification of a sequence of speech frames into lexical items (isolated words). Many factors affect the performance of automatic speech recognition systems. They have been categorized into those relating to speaker-independent recognition mode, the time evolution of speech (time representation of the neural network input), and the effects of noise. The first two topics are described herein, while the third is described in (Varga, 1990).

3.1 USE OF VARIOUS NETWORK TOPOLOGIES

Experiments were carried out to examine the performance of several network topologies such as those evaluated in section 2. A TDNN can be thought of as a single Hidden Markov Model state spread out in time. The lower levels of the network are forced to be shift-invariant, instantiating the idea that the absolute time of an event is not important. Scaly networks are similar to TDNNs in that the hidden units of a scaly network are fed by partially overlapping input windows. As reported in previous sections, LVQ proved efficient for the phoneme classification task, and an "optimal" architecture was found as a combination of a TDNN and LVQ; it was used here for isolated word recognition.

From experiments reported in detail in (Varga, 1990), there seems little justification for fully connected networks with their thousands of weights when TDNNs and scaly networks with hundreds of weights have very similar performance. This performance is about 83 % (the nearest-class-mean classifier gave 69 %) on the E-set database, a portion of the larger CONNEX alphabet database which British Telecom Research Laboratories have prepared for experiments on neural networks. The first utterance by each speaker of the "E" words "B, C, D, E, G, P, T, V" was used. The database is divided into training and test sets, each consisting of approximately 400 words and 50 speakers.

Other experiments were conducted on an isolated-digits recognition task in speaker-independent mode (25 speakers for training and 15 for test), using the networks already introduced. A summary of the best performance obtained (train / test):

  k-means: 97.38 / 90.57
  TDNN: 98.90 / 94.0
  LVQ: 98.26 / 92.57
  TDNN + LVQ: 99.90 / 97.50

Performance on training is roughly equivalent for all algorithms. For generalization, the performance of the combined architecture is clearly superior to the other techniques.

3.2 TIME EVOLUTION OF SPEECH

In contrast to images, which are patterns of fixed size, speech signals display a temporal evolution. Approaches have to be developed for how a network with its fixed number of input units can cover word patterns of variable size and also account for dynamic time variations within words. Different projections onto the fixed-size collection of NxM network input elements (number of vectors x number of coefficients per vector) have been tested, such as the following (sketches of two of them are given after this list):

- Linear normalization: the boundaries of a word are determined by a conventional endpoint-detection algorithm, and the N' feature vectors are linearly compressed or expanded to N by averaging or duplicating vectors.
- Time warp: word boundaries are located initially; some parts of a word of length N' are compressed, while others are stretched and some remain constant, with respect to the speech characteristics.
- Noise boundaries: the sequence of N' vectors of a word is placed in the middle of, or at random within, the area of the desired N vectors, and the margins are padded with the noise of the speech pauses.
- Trace segmentation: the trace followed by the temporal course through the M-dimensional feature-vector space is divided into a constant number of new sections of identical length.

These time-normalization procedures were used with the scaly neural network (Varga, 1990).
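Two of these projections are easy to make concrete (my own sketches; frames is an (N', M) array of feature vectors):

```python
import numpy as np

def linear_normalize(frames, N):
    """Linear normalization: resample a word of N' frames to N frames
    (compressing or expanding by interpolation along the time axis)."""
    Np = len(frames)
    idx = np.linspace(0, Np - 1, N)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, Np - 1)
    w = idx - lo
    return (1 - w)[:, None] * frames[lo] + w[:, None] * frames[hi]

def trace_segmentation(frames, N):
    """Trace segmentation: cut the trajectory through feature space into
    N sections of equal arc length and keep one frame per section."""
    steps = np.linalg.norm(np.diff(frames, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(steps)])
    idx = np.searchsorted(arc, np.linspace(0.0, arc[-1], N))
    return frames[np.clip(idx, 0, len(frames) - 1)]
```

Linear normalization treats all frames equally, whereas trace segmentation allocates more output frames to regions where the spectrum is changing quickly, which is why the two can behave differently on fast transients.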
It turned out that three of the methods for time representation - linear normalization, and trace segmentation with endpoint detection or with noise boundaries - are well suited to solving the transformation problem for a fixed-size input layer. The recognition scores are in the 98.5 % range (with ±1 % deviation) for 10 digits and 99.5 % for a 57-word vocabulary in speaker-independent mode. There is no clear indication that any one of these approaches is superior to the others.

3.3 CONCLUSIONS

The neural network techniques investigated have delivered performance comparable to classical techniques. It is now well agreed that hybrid systems (the integration of Hidden Markov Modeling and MLPs) yield enhanced performance. Initial steps have been taken towards the integration of Hidden Markov Models and MLPs. Mathematical formulations are required to unify hybrid models. The temporal aspect of speech has to be carefully considered and taken into account by the formalism.

4 PARAMETERS-TO-PARAMETERS

The main objective of this task was to provide the speech recognizer with a set of parameters adapted to the current user, without any training phase. Spectral parameters corresponding to the same sound uttered by two speakers are generally different. Speaker-independent recognizers usually take this variability into account using stochastic models and/or multi-references. An alternative approach consists in learning spectral mappings to transform the original set of parameters into another one better adapted to the characteristics of the current user and the speech acquisition conditions. The procedure can be summed up as follows:

- Load the standard dictionary of the reference speaker.
- Acquire an adaptation vocabulary from the new speaker.
- Time-warp each new utterance against the corresponding reference utterance. Temporal variability is thus softened, and corresponding feature vectors (input-output pairs) become available.
- Learn the spectral transformations from these associated vectors.
- Apply the adaptation operator to the reference dictionary, leading to an adapted one.
- Evaluate the recognizer using the adapted dictionary.

The mathematical formulation is based on a very important result regarding input-output mappings, demonstrated by Funahashi (Funahashi, 1989) and by Hornik, Stinchcombe & White (Hornik, 1989). They proved that a network using a single hidden layer (a net with 3 layers) with an arbitrary squashing function can approximate any Borel measurable function to any desired degree of accuracy. Experiments were conducted (see details in (Choukri, 1990)) on an isolated-word database consisting of 20 English words recorded 26 times by 16 different speakers (TI database (Choukri, 1987)). The first repetition of the 20 words serves as reference templates; tests are conducted on the remaining 25 repetitions. Before adaptation, the cross-speaker score is 68 %. On average, adaptation with the multi-layer perceptron provides a 15 % improvement compared to the non-adapted results.
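A sketch of the spectral-mapping learner in this single-hidden-layer setting follows (my own illustration; dimensions and learning rate are assumptions, and the DTW alignment that produces the training pairs is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 32                            # filter-bank dimension, hidden units
W1, b1 = rng.normal(0, 0.1, (h, d)), np.zeros(h)
W2, b2 = rng.normal(0, 0.1, (d, h)), np.zeros(d)

def forward(x):
    z = np.tanh(W1 @ x + b1)             # single squashing hidden layer
    return W2 @ z + b2, z

def train_step(x, y, lr=0.01):
    """One gradient step on ||f(x) - y||^2 for a DTW-aligned pair
    (x: new-speaker frame, y: reference-speaker frame)."""
    global W1, b1, W2, b2
    out, z = forward(x)
    e = out - y                          # output error
    dz = (W2.T @ e) * (1 - z**2)         # backpropagate through tanh
    W2 -= lr * np.outer(e, z); b2 -= lr * e
    W1 -= lr * np.outer(dz, x); b1 -= lr * dz
```

Once trained, forward() is the adaptation operator applied to every frame of the reference dictionary.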
5 CONCLUSIONS
For phonetic classification, sophisticated networks, combinations of TDNNs and LVQ, proved to be more efficient than classical approaches or simple network architectures; their use for isolated word recognition offered comparable performance. Various approaches to cope with temporal distortions were implemented and demonstrate that the combination of sophisticated neural networks and their cooperation with HMMs is a promising research axis. It has also been established that basic MLPs are efficient tools to learn speaker-to-speaker mappings for speaker adaptation procedures. We expect more sophisticated MLPs (recurrent and context sensitive) to perform better.

Acknowledgements: This project is partially supported by the European ESPRIT Basic Research Actions programme (BRA 3228). The partners involved are: CGInn (F), ENST (F), IRIAC (F), RSRE (UK), SEL (FRG), and UPM (SPAIN).

References
K. Choukri. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), first deliverable of Task f, June 1990.
F. Bimbot. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), first deliverable of Task 9, June 1990.
A. Varga. (1990) Speech processing and recognition using integrated neurocomputing techniques: ESPRIT Project SPRINT (BRA 3228), first deliverable of Task S, June 1990.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. (1987) Phoneme recognition using Time-Delay Neural Networks. Technical Report, CMU/ATR, Oct 30, 1987.
Y. Bennani, N. Chaourar, P. Gallinari, and A. Mellouk. (1990) Comparison of neural net models on speech recognition tasks. Technical Report, Université de Paris-Sud, LRI, 1990.
Ken-Ichi Funahashi. (1989) On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(2):183-192, March 1989.
K. Hornik, M. Stinchcombe, and H. White. (1989) Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
K. Choukri. (1987) Several approaches to Speaker Adaptation in Automatic Speech Recognition Systems. PhD thesis, ENST (Telecom Paris), Paris, 1987.

AUTHORS AND CONTRIBUTORS
Y. BENNANI, K. CHOUKRI, D. HOWELL, A. MELLOUK, H. VALBRET, F. BIMBOT, L. DODD, M. IMMENDORFER, C. MONTACIE, A. VARGA, J. BRIDLE, F. FOGELMAN, A. KRAUSE, R. MOORE, A. WALLYN, N. CHAOURAR, P. GALLINARI, K. McNAUGHT, O. SEGARD
SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

Sylvain Chevallier, TAO, INRIA-Saclay, Univ. Paris-Sud, F-91405 Orsay, France, [email protected]
Hélène Paugam-Moisy, LIRIS, CNRS, Univ. Lyon 2, F-69676 Bron, France, [email protected]
Michèle Sebag, TAO, LRI, CNRS, Univ. Paris-Sud, F-91405 Orsay, France, [email protected]

Abstract
Many complex systems, ranging from neural cell assemblies to insect societies, involve and rely on some division of labor. How to enforce such a division in a decentralized and distributed way is tackled in this paper, using a spiking neuron network architecture. Specifically, a spatio-temporal model called SpikeAnts is shown to enforce the emergence of synchronized activities in an ant colony. Each ant is modelled from two spiking neurons; the ant colony is a sparsely connected spiking neuron network. Each ant makes its decision (among foraging, sleeping and self-grooming) from the competition between its two neurons, after the signals received from its neighbor ants. Interestingly, three types of temporal patterns emerge in the ant colony: asynchronous, synchronous, and synchronous periodic foraging activities, similar to the actual behavior of some living ant colonies. A phase diagram of the emergent activity patterns with respect to two control parameters, respectively accounting for ant sociability and receptivity, is presented and discussed.

1 Introduction
The emergence of organization is at the core of many complex systems, from neural cell assemblies to living insect societies. For instance, the emergence of synchronized rhythmical activity has been observed in many social insect colonies [2, 4, 5, 7], where synchronized patterns of activity may indeed contribute to the collective efficiency in various ways. But how do ants proceed to temporally synchronize their activity? As suggested by Cole [4], the synchronization of activity is a consequence of temporal coupling between individuals. It thus comes naturally to investigate how spiking neuron networks (SNNs), also based on temporal dynamics, enable modelling the emergence of collective phenomena, specifically synchronized activities, in complex systems. The reader's familiarity with SNNs, inspired from the mechanisms of information processing in the brain, is assumed in the following, referring to [18] for a comprehensive presentation.

1.1 Related work
In computational neuroscience, SNNs are well known for generating a rich variety of dynamical patterns of activity, e.g. synchrony of cell assemblies [9], complete synchrony [17], transient synchrony [10], order-chaos phase transitions [20] or polychronization [11]. For instance, a mesoscopic model [3] explains the emergence of a rhythmic oscillation at the network level, resulting from the competition of excitatory and inhibitory connections between neurons. In computer science, the field of reservoir computing (RC) [13] focuses on analyzing and exploiting the echoes generated by external inputs in the dynamics of sparse random networks. The proposed SpikeAnts model features one distinctive characteristic compared to the state of the art in RC and SNNs: its only aim is to model an emergent property in a complex closed system; it neither receives any external inputs nor involves any learning rule.
To the best of our knowledge, current models of emergence are mostly based on statistical physics, involving differential equations and mean field approaches [19], or on mathematics and computer science, using random Markov fields, cellular automata or multi-agent systems.

1.2 Target of the SpikeAnts model
The SpikeAnts model implements a distributed decision making process in a population of agents, say an ant colony. The phenomenon to analyze is the division of labor. The model relies on the spatio-temporal interactions of spiking neurons, where each ant agent is accounted for by two neurons. A simplified scheme is proposed, inspired from [2] and [16]. Each agent may be in one out of four states: Observing, Foraging, Sleeping or self-Grooming (Fig. 1). The interactions take place during the observation round. Each agent a observes its environment, and if it perceives none or too few working agents, a goes foraging for a given time and eventually goes to sleep. Otherwise, if a perceives "sufficiently many" agents engaged in foraging, it goes back to the nest for less vital tasks (the grooming state) before returning to observation after a while. Each state lasts for a fixed duration (resp. t_O, t_F, t_S and t_G), with an exception for the observation state: the observation period is only subject to an upper bound t_O. If the agent sees sufficiently many other foraging ants before the end of the observation period, it can switch at once to the self-grooming state.

Figure 1: (Left) Transitions between the four agent states: Grooming, Observing, Foraging and Sleeping. Black arrows denote transitions and the dotted arrow indicates an inhibitory message. (Right) An example of agent schedule, alternating Foraging, Sleeping (long) and Grooming (short) periods along the observation time.

The agent decisions only depend on the information exchanged between agents, through agent neurons sending spikes to (respectively, receiving spikes from) other agents in the population. It must be emphasized that the proposed decision process does not assume the agents' ability to "count" (here, the number of their foraging neighbors). In the meanwhile, this process is deterministic, contrasting with the threshold-based probabilistic models used in [1, 2, 7].

2 The SpikeAnts spiking neuron network
This section describes the structure of the SpikeAnts model. Each ant agent is modelled by two spiking neurons. Any two agents (i, j) are connected with an average density ρ (0 ≤ ρ ≤ 1). The ant colony thus defines a sparsely connected network of spiking neurons, referred to as SNN.

2.1 Spiking neuron models
An agent is modelled by two coupled spiking neurons, respectively a Leaky Integrate-and-Fire (LIF) neuron [6, 14] and a Quadratic Integrate-and-Fire (QIF) neuron [8, 15]. These neuron models are biologically plausible and have been thoroughly studied. We shall show that their coupling achieves a frugal control of the agent behavior. A LIF neuron fires a spike if its potential V_p exceeds a threshold θ; upon firing a spike, V_p is reset to V_reset^p. Formally:

    dV_p/dt = −τ·(V_p(t) − V_rest) + I_exc(t)   if V_p < θ;   else the neuron fires a spike and V_p is set to V_reset^p,   (1)

where τ is the relaxation constant. I_exc(t) models instantaneous synaptic interactions. Let Pre denote the set of presynaptic neurons (such that there exists an edge from every neuron in Pre to the current neuron), and let Train_i denote the spike train of the i-th neuron in Pre; then
    I_exc(t) = w · Σ_{i∈Pre} Σ_{j∈Train_i} δ(t − t_ij),   (2)

where w is a synaptic weight controlling the dynamics of the SNN (more in section 3.1), δ(.) is the Dirac distribution and t_ij is the firing time of the j-th spike from the i-th presynaptic neuron. The QIF neuron is described by the evolution of the potential V_a, compared to the resting potential V_rest and an internal threshold V_thres. Additionally, it receives an internal signal I_clock modelling a gap junction connection:

    dV_a/dt = τ·(V_a(t) − V_rest)·(V_a(t) − V_thres) + I_inh(t) + I_clock(t)   if V_a < θ;   else the neuron fires a spike and V_a is set to V_reset^a.   (3)

Depending on whether the reset potential is greater than the internal threshold (V_reset^a > V_thres), the QIF neuron is bistable [12], which motivated the choice of this neuron model. If V_reset^a < V_thres, the membrane potential V_a stabilizes on V_rest when there is no external perturbation, and the neuron thus exhibits an integrator behavior. When V_reset^a > V_thres, the neuron displays a bursting behavior and fires periodically.

2.2 The ant agent model
Each SpikeAnts agent mimics an ant. Its behavior is controlled by the competition between two coupled spiking neurons, an active one (QIF, Eq. (3)) and a passive one (LIF, Eq. (1)). The agent additionally involves an internal unit providing the I_clock signal. During the observation round, the ant makes its decision (whether it goes foraging) based on the competition between its active and passive neurons (Fig. 2). Both neurons are aware of the foraging neighbor ants: the signal emitted by these neighbors is an excitatory signal for the passive neuron and an inhibitory signal for the active one, I_inh(t) = −I_exc(t). The active neuron additionally receives the excitatory signal I_clock(t) of the internal clock unit. In the case where the ant agent does not see too many foraging ants, the internal excitatory signal I_clock(t) dominates the inhibitory signal I_inh(t), and the active neuron fires first, driving the ant to the foraging state (first and last episodes in Fig. 2).

Figure 2: Membrane potentials of the active (dark/red) and passive (grey/green) neurons; the dashed line indicates the threshold θ. The first observation state starts at 20 ms: the active neuron fires before the passive one, the agent thus goes foraging, and the active neuron keeps sending spikes during the whole foraging period (signalling its foraging behavior to other agents). After a sleep period (from circa 50 to 70 ms), a second observation round starts. This time the passive neuron fires before the active one; the agent thus goes self-grooming, and switches to the observation state thereafter. During the last observation round, the active neuron wins again against the passive one, and the agent goes foraging.
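For concreteness, here is a minimal Python sketch of Eqs. (1)-(3) (our own illustration, with the parameter values of Table 1 below; it uses a plain Euler step, whereas the simulations described in Section 3 rely on a Runge-Kutta scheme):

    import numpy as np

    TAU, V_REST, THETA = 0.1, 0.0, 1.0            # neural parameters (Table 1)
    VP_RESET, V_THRES, VA_RESET = -0.1, 0.5, 0.55
    I_CLOCK, DT = 0.1, 0.1                        # clock input; 0.1 ms time step

    def lif_step(v_p, i_exc):
        """Euler step of the passive (LIF) neuron, Eq. (1): (potential, spiked)."""
        v = v_p + DT * (-TAU * (v_p - V_REST) + i_exc)
        return (VP_RESET, True) if v >= THETA else (v, False)

    def qif_step(v_a, i_inh):
        """Euler step of the active (QIF) neuron, Eq. (3), clock-driven."""
        v = v_a + DT * (TAU * (v_a - V_REST) * (v_a - V_THRES) + i_inh + I_CLOCK)
        return (VA_RESET, True) if v >= THETA else (v, False)

With these values, the clock alone drives the active neuron from rest to threshold in roughly 10 ms, which is consistent with the observation bound t_O = 10.5 ms of Table 1.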
The decision making of the ant agent thus relies on the competition between its active and passive neurons. In particular, the number of spikes needed for an ant to go foraging or self-grooming depends on the temporal dynamics of the system; it varies from one observation episode to another. After some rest (self-grooming or sleeping states, with respective durations t_G and t_S, t_G < t_S), the ant returns to the observation state. As mentioned above, incoming spikes are only relevant to the active and passive neurons of an observing ant. During the foraging and resting states, presynaptic spikes have no influence, which can be thought of as an intrinsic plasticity mechanism [21] driven by the internal unit. The internal unit can indeed be seen as the ant's biological clock. In a further model, it will be replaced by a neural group interacting with the active and passive neurons through intrinsic plasticity, e.g. using a transient increase of θ for the LIF and QIF neurons.

2.3 Model parameters
Overall, the SpikeAnts model is controlled by three types of parameters, respectively related to the spiking neuron models, to the ant agents (state durations) and to the whole population (size and connectivity of the SNN). The default parameter values used in the simulations are displayed in Table 1. The state durations are chosen such that their ratios are not integers, in order to avoid spurious synchronizations. Note that the state duration timescale is not significant at the ant colony level.

    Parameter type   Symbol      Description                             Value (units)
    Neural           τ           Membrane relaxation constant            0.1 mV⁻¹
                     V_rest      Resting potential                       0.0 mV
                     θ           Spike firing threshold                  1.0 mV
                     V_reset^p   Passive neuron reset potential          -0.1 mV
                     V_thres     Active neuron bifurcation threshold     0.5 mV
                     V_reset^a   Active neuron reset potential           0.55 mV
                     I_clock     Active neuron constant input current    0.1 mV
                     w           Synaptic weight                         0.01 mV⁻¹
    Agent            t_F         Foraging duration                       47.1 ms
                     t_O         Maximum observation duration            10.5 ms
                     t_S         Sleeping duration                       45.7 ms
                     t_G         Self-grooming duration                  16.7 ms
    Population       ρ           Connection probability                  0.3
                     M           Population size                         150 agents

    Table 1: Neural, model and population parameters used in simulations.

3 Experiments
This section reports on the experimental study of the SpikeAnts model, first describing the experimental setting and the goals of the experiments. The population behavior is measured by a global indicator, and its sensitivity w.r.t. the SpikeAnts parameters is studied. Two compound control parameters, summarizing the model parameters and governing the emergent synchronization of the system, are proposed. A consistent phase diagram, depicting the global synchronization in the plane defined from both control parameters, is displayed and discussed.

Goals of experiments. A first goal of the experiments is to measure the global activity of the population, denoted F and defined as the overall time spent foraging:

    F = Σ_t n_F(t),   (4)

where n_F(t) is the number of foraging agents at time t. The study focuses on the sensitivity of F w.r.t. the model parameters. The second and most important goal of the experiments is to study the temporal structure of the population activity. A synchronization indicator will be proposed and its sensitivity w.r.t. the model parameters will be examined.

Experimental settings. Each run starts with all ants initially sleeping. Each ant wakes up after some time uniformly drawn in ]0, 2t_S].
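Putting the pieces together, an ant's life cycle can be sketched as a small state machine (again our own illustration, reusing lif_step and qif_step above; neighbor spikes are approximated by a current proportional to the number of currently foraging neighbors):

    import random

    T_F, T_O, T_S, T_G, W = 47.1, 10.5, 45.7, 16.7, 0.01   # Table 1

    class Ant:
        def __init__(self, neighbors):
            self.state = "S"                     # all ants start asleep
            self.timer = random.uniform(0.0, 2 * T_S)
            self.v_p = self.v_a = V_REST
            self.neighbors = neighbors           # indices of connected ants

        def step(self, colony):
            self.timer -= DT
            if self.state == "O":
                # Excitation from foraging neighbors; inhibition mirrors it.
                i_exc = W * sum(colony[j].state == "F" for j in self.neighbors)
                self.v_p, p_spike = lif_step(self.v_p, i_exc)
                self.v_a, a_spike = qif_step(self.v_a, -i_exc)
                if p_spike:                          # passive wins: grooming
                    self.state, self.timer = "G", T_G
                elif a_spike or self.timer <= 0:     # active wins: foraging
                    self.state, self.timer = "F", T_F
            elif self.timer <= 0:
                if self.state == "F":                # foraging ends in sleep
                    self.state, self.timer = "S", T_S
                else:                                # from S or G, observe again
                    self.state, self.timer = "O", T_O
                    self.v_p = self.v_a = V_REST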
Spiking neurons are simulated using a discrete time scheme: numerical simulations of the spiking neuron network are based on a clock-driven simulator, using a Runge-Kutta method for the approximation of the differential equations, with a small time step of 0.1 ms to enforce numerical stability. Each run lasts for 100,000 time steps. All reported results are averaged over 10 independent runs.

3.1 Sensitivity analysis of the foraging effort
This section first examines how the overall foraging effort F depends on the size M of the population, the connection rate ρ and two neural parameters, the active neuron reset potential V_reset^a and the synaptic weight w. The average value of F is reported with its standard deviation in Fig. 3.

Figure 3: Sensitivity analysis of the average foraging effort versus population size M (top left), connection probability ρ (top right), active neuron reset potential V_reset^a (bottom left) and synaptic weight w (bottom right).

The overall foraging effort F was expected to increase linearly with the population size M. While it indeed increases with M, it displays a breakdown around M = 600 (Fig. 3, top left); this unexpected change will be explained in section 3.2 and related to the increased variability of the population synchronization. F was expected to decrease exponentially with the connectivity ρ, and it does so (Fig. 3, top right): the more neighbors, the more likely an ant will see other foraging ants, and the more likely it will thus avoid going foraging itself. Along the same line, F was expected to decrease with the reset potential V_reset^a: the closer V_reset^a is to θ, the more spikes a foraging ant will send, exciting other ants' passive neurons and thereby sending these ants to rest (Fig. 3, bottom left; the value of θ is 1, and F indeed goes to 0 as V_reset^a goes to 1). The most surprising result regards the influence of the synaptic weight w (Fig. 3, bottom right). It was expected that high w values would favor the triggering of passive neurons, and thus adversely affect the foraging effort. High w values however mostly result in a high variance of F. The interpretation proposed for this fact goes as follows. For low w values, an ant behaves as a "good statistician", meaning that its decision is based on observing many other foraging agents. Accordingly, the foraging/resting ratio is very stable along time and across runs. As w increases however, it becomes possible for an ant to take decisions based on few cues, and the behavioral variability increases. More precisely, the variance of F is low for small w values (an ant makes its decision based on about 80 spikes for w = 0.01). The variance dramatically increases in a narrow region around w = 0.15; there an ant makes its decision based on circa 6 spikes, and small variations in the received spike trains might thus lead to different decisions, explaining the high variance of F. For higher w values however, the variance of F decreases again. A close look at the experimental results reveals the existence of different temporal regimes with abrupt transitions among them, explaining the breakdown around M = 600 ants and the abrupt increase and decrease of the F variance.
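Continuing the sketch above, the foraging effort of Eq. (4) can be measured by simulating the colony and counting foragers at each step (our own minimal runner, not the authors' simulator):

    def run_colony(M=150, rho=0.3, steps=100_000, seed=0):
        """Simulate M ants connected with probability rho; return n_F(t)."""
        random.seed(seed)
        neighbors = [[j for j in range(M) if j != i and random.random() < rho]
                     for i in range(M)]
        colony = [Ant(nb) for nb in neighbors]
        n_f = []
        for _ in range(steps):
            for ant in colony:
                ant.step(colony)
            n_f.append(sum(ant.state == "F" for ant in colony))
        return n_f

    foraging_effort = sum(run_colony())   # F of Eq. (4)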
3.2 Emergent synchronization: Control parameters and phase transitions

Figure 4: (Top row) Asynchronous, synchronous aperiodic and synchronous periodic patterns of activity: number of foraging ants n_F(t) versus time, for t = 1...2,000. (Bottom row) Temporal correlation of the activity, plotting n_F(t+1) against n_F(t) for the above three patterns, for t = 1...100,000.

Three synchronization patterns emerge in the experimental results. The first one, referred to as asynchronous (Fig. 4, left), depicts a situation where each ant (almost) independently makes its own decisions. The second one, referred to as synchronous (Fig. 4, middle), displays some coordination among the ants; specifically, the number of foraging ants is piecewise constant, though varying from one time interval to another. The third pattern, referred to as periodic synchronous (Fig. 4, right), involves two stable subpopulations which forage alternately; the population enters a bi-phase mode, as actually observed in some ant colonies [4, 5]. The difference between the three patterns of activity is most visible from the phase diagram plotting n_F(t+1) vs n_F(t) (Fig. 4, bottom row; transient states are removed in the synchronized periodic and aperiodic regimes for the sake of clarity). The orbit of the synchronous aperiodic activity indicates the presence of at least one attractor, whereas the synchronous periodic activity displays a flip bifurcation. The ergodicity of the SpikeAnts system is first analyzed based on the Lyapunov exponents, after the computation algorithms proposed in [22]. On asynchronous patterns, the mean value of the 5,000 Lyapunov exponents found with an 8-dimensional analysis is −0.01 ± 0.1. For synchronous aperiodic patterns, the mean value of the 3,500 Lyapunov exponents found with a 6-dimensional analysis is also −0.01 ± 0.1 (after discarding the transient states). Whereas the asynchronous and synchronous aperiodic activities lie at the edge of chaos, the periodic synchronous regime only displays large negative Lyapunov exponents, indicating a very stable behavior. An entropy-based indicator is proposed to analyze the emergent synchronization of the SpikeAnts system. Let I denote the set of values n_F(t), after pruning all transient time steps such that n_F(t) ≠ n_F(t+1) and n_F(t) ≠ n_F(t−1); the foraging histogram is defined by associating to each value k in I the number n_k of time steps such that n_F(t) = k. The synchronization of the population is finally measured from the histogram entropy H:

    H = − Σ_{k∈I} (n_k / Σ_m n_m) · log(n_k / Σ_m n_m).   (5)

The entropy of the asynchronous regime is zero, since all states are transient. The synchronous periodic regime, where two subpopulations alternately forage, gets a low entropy (< log 2). Finally, the synchronous aperiodic regime, which involves a few dozen subpopulations, gets a high entropy value.
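The indicator of Eq. (5) is straightforward to compute from the n_F(t) series (a minimal sketch; names are ours):

    import numpy as np

    def sync_entropy(n_f) -> float:
        """Histogram entropy H (Eq. 5), pruning transient time steps."""
        n_f = np.asarray(n_f)
        # Keep steps equal to at least one temporal neighbor.
        stable = (n_f[1:-1] == n_f[2:]) | (n_f[1:-1] == n_f[:-2])
        values = n_f[1:-1][stable]
        if values.size == 0:
            return 0.0          # fully asynchronous: every state is transient
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())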
The transition from one regime to another is clearly related to the model parameters. The goal thus becomes to identify the influential factors best explaining the population behavior. A first such influential factor, defined as ρ√M and referred to as sociability, controls the amount of interactions between the ants. A high sociability enables the ants to base their foraging decision on reliable estimates of the current foraging activity, thus entailing a low variance of the global foraging effort. A second influential factor, referred to as receptivity, is the ratio between the weight w of the input signal and the subthreshold range (depending on the resting potential V_rest and the spike firing threshold θ). This ratio, w/|θ − V_rest|, indicates the amplitude of the depolarization induced by an input spike compared to the difference between rest and threshold. A high receptivity thus enables the ant to postpone its foraging decision based on few cues (i.e. visible foraging ants), thereby entailing a high variance of the global foraging effort. The sociability and receptivity factors, referred to as control parameters, support a clear picture of the asynchronous, synchronous aperiodic and periodic synchronous patterns. The entropy (Fig. 5, left) and its variance (Fig. 5, right) are displayed in the 2D plane defined from the sociability and receptivity of the SpikeAnts system, defining the phase diagram of the system. For a low sociability and a high receptivity (region A in Fig. 5), few interactions among ants take place and each ant makes its decisions based on few cues. In this region, the population is a collection of quasi-independent individuals, and few ants (60 on average in Fig. 4) are foraging at any given time step. For a higher sociability and a low receptivity (region B in Fig. 5), ants see more of their peers and they base their decisions on reliable estimates of the foraging activity. A synchronization of the ant activities emerges, in the sense that many agents make their foraging decisions at the same time. Still, the synchronization remains aperiodic, i.e. the number of foraging ants varies from 50 to 240 (Fig. 4). For a high sociability and a high receptivity (region C in Fig. 5), ants see many of their peers and make their decisions based on few cues. In this case a periodic synchronized regime is observed, where two subpopulations alternately go foraging (the first one involves about 950 ants in Fig. 4).

Figure 5: Emergence of synchronization in the population activity: entropy H (left) and variance of H (right) versus the ant sociability and receptivity. The asynchronous pattern, with entropy H = 0, corresponds to low sociability and high receptivity (region A). The synchronous aperiodic pattern, with high entropy, corresponds to medium sociability and low receptivity (region B). The synchronous periodic pattern, with H ≈ log 2, corresponds to both high sociability and receptivity (region C).
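To make the two control parameters concrete, a small helper (our own sketch, following the definitions and the ρ√M reconstruction above) maps a SpikeAnts configuration to the plane of Fig. 5:

    import math

    def control_parameters(rho, M, w, theta=1.0, v_rest=0.0):
        """Return (sociability, receptivity) of a configuration."""
        sociability = rho * math.sqrt(M)
        receptivity = w / abs(theta - v_rest)
        return sociability, receptivity

    # Default configuration of Table 1: about (3.7, 0.01).
    print(control_parameters(rho=0.3, M=150, w=0.01))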
Figure 6: A representative simulation: the global behavior switches from a synchronous aperiodic regime to an asynchronous one before stabilizing in a periodic synchronous regime.

Complementary experiments show abrupt transitions between the different regimes in the borderline regions. Specifically, a synchronous aperiodic regime (region B) is prone to evolve into an asynchronous (region A) or periodic synchronous (region C) regime (Figure 6). Quite the contrary, the periodic synchronous regime is stable, i.e. the population does not get back to any other regime after the periodic synchronous regime is installed. The aperiodic synchronous regime, though less stable than the periodic one, is far more stable than the asynchronous one.

4 Discussion
The main contribution of this paper is a local and parsimonious model, accounting for individual decision making, which reproduces the emergence of synchronized activity in a complex system in a realistic way: the three different regimes obtained in simulation are comparable to the different patterns of activity observed in social insect colonies [7, 5, 4]. The synchronization patterns that emerge at the macroscopic scale can be fully controlled by a few model parameters ruling the sociability of the ants (whether an ant may observe many other ants) and their receptivity (whether an ant makes its foraging decision based on a few cues). The synchronization patterns are endogenous, with no external influence from the environment. Additionally, they do not rely on individual synchronizations, as each agent has a specific behavior, different from its neighbors' and varying during simulation time. To the best of our knowledge, the SpikeAnts model is the first one accounting for a population behavior and based on spiking neurons. SpikeAnts captures both spatial and temporal features of the complex system in a deterministic way (as opposed to stochastic models). It does not require any external constraints or data. Most importantly, it does not require the agents to feature sophisticated skills (e.g. "counting" their foraging neighbors). It is worth noting that SpikeAnts does not involve the resolution of differential equations: while spiking neurons are modelled in continuous time, their behavior is computed through finite differences, parameterized by the user-specified time step. In summary, SpikeAnts demonstrates that SNNs can be used to model a simple self-organizing system. It hopefully opens new perspectives for modelling emergent phenomena in complex systems. A first perspective for further research is to investigate the temporal dynamics of spike trains using standard approaches from neuroscience. The underlying question is whether the population synchronization can be facilitated, e.g. in the transient regime, by making spiking neurons sensitive to the synchrony of spike trains. The role of inhibition and of the excitation/inhibition balance in the emergence of synchronized patterns will be studied. In particular, the impact of individual parameter variations on the phase diagram will be analyzed. A second perspective is to endow SpikeAnts with some learning skills, e.g. adapting the connection weights w with a local unsupervised learning rule (e.g. Spike-Timing-Dependent Plasticity), in order to optimize the collective efficiency of the population. Along the same line, the ability of SpikeAnts to cope with external perturbations (e.g. affecting the number of foraging ants) will be investigated.

Acknowledgments
We thank Mathias Quoy, Université de Cergy, for many fruitful discussions about complex systems and helpful remarks about this paper. We thank Jean-Louis Deneubourg and José Halloy, Université Libre de Bruxelles, for many insights into the collective behavior of living systems. This work was supported by NSF grant No. PHY-9723972 and by the European Integrated Project SYMBRION.

References
[1] E. Bonabeau, G. Theraulaz, and J.L. Deneubourg.
Fixed response thresholds and the regulation of division of labor in insect societies. Bulletin of Mathematical Biology, 60(4):753-807, July 1998.
[2] E. Bonabeau, G. Theraulaz, and J.L. Deneubourg. The synchronization of recruitment-based activities in ants. BioSystems, 45:195-211, 1998.
[3] N. Brunel and X.J. Wang. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. Journal of Neurophysiology, 90(1):415-430, 2003.
[4] B.J. Cole. Short-term activity cycles in ants: Generation of periodicity by worker interaction. The American Naturalist, 137(2), 1991.
[5] N.R. Franks and S. Bryant. Rhythmical patterns of activity within the nest of ants. Chemistry and Biology of Social Insects, pages 122-123, 1987.
[6] W. Gerstner and W. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[7] S. Goss and J.L. Deneubourg. Autocatalysis as a source of synchronised rhythmical activity in social insects. Insectes Sociaux, 35(3):310-315, 1988.
[8] D. Hansel and G. Mato. Existence and stability of persistent states in large neuronal networks. Physical Review Letters, 86(18):4175-4178, April 2001.
[9] D.O. Hebb. The Organization of Behaviour. Wiley, New York, 1949.
[10] J.J. Hopfield and C.D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proc. Natl. Acad. Sci., 98(3):1282-1287, 2001.
[11] E.M. Izhikevich. Polychronization: Computation with spikes. Neural Computation, 18(2):245-282, 2006.
[12] E.M. Izhikevich. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, chapter One-Dimensional Systems. MIT Press, 2007.
[13] H. Jaeger, W. Maass, and J. Principe. Special issue on echo state networks and liquid state machines (editorial). Neural Networks, 20(3):287-289, April 2007.
[14] B.W. Knight. Dynamics of encoding in a population of neurons. The Journal of General Physiology, 59(6):734-766, June 1972.
[15] P.E. Latham, B.J. Richmond, P.G. Nelson, and S. Nirenberg. Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83(2):808-827, February 2000.
[16] W. Liu, A.F.T. Winfield, J. Sa, J. Chen, and L. Dou. Towards energy optimisation: Emergent task allocation in a swarm of foraging robots. Adaptive Behavior, 15(3):289-305, 2007.
[17] R.E. Mirollo and S.H. Strogatz. Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics, 50(6):1645-1662, 1990.
[18] H. Paugam-Moisy and S.M. Bohte. Handbook of Natural Computing, chapter 10: Computing with Spiking Neuron Networks. Springer, 2010 (in press).
[19] D. Phan, M.B. Gordon, and J.P. Nadal. Cognitive Economics, chapter Social interactions in economic theory: An insight from statistical mechanics, pages 335-358. Springer, 2004.
[20] B. Schrauwen, L. Büsing, and R. Legenstein. On computational power and the order-chaos phase transition in reservoir computing. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, pages 1425-1432. MIT Press, 2008.
[21] J. Triesch. Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19(4):885-909, 2007.
[22] A. Wolf, J. Swift, H. Swinney, and J. Vastano. Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena, 16(3):285-317, 1985.
Random Projections for k-means Clustering

Christos Boutsidis, Department of Computer Science, RPI
Anastasios Zouzias, Department of Computer Science, University of Toronto
Petros Drineas, Department of Computer Science, RPI

Abstract
This paper discusses the topic of dimensionality reduction for k-means clustering. We prove that any set of n points in d dimensions (rows in a matrix A ∈ R^{n×d}) can be projected into t = Ω(k/ε²) dimensions, for any ε ∈ (0, 1/3), in O(nd·⌈ε⁻²k/log(d)⌉) time, such that with constant probability the optimal k-partition of the point set is preserved within a factor of 2 + ε. The projection is done by post-multiplying A with a d×t random matrix R having entries +1/√t or −1/√t with equal probability. A numerical implementation of our technique and experiments on a large face images dataset verify the speed and the accuracy of our theoretical results.

1 Introduction
The k-means clustering algorithm [16] was recently recognized as one of the top ten data mining tools of the last fifty years [20]. In parallel, random projections (RP), or so-called Johnson-Lindenstrauss type embeddings [12], became popular and found applications in both theoretical computer science [2] and data analytics [4]. This paper focuses on the application of the random projection method (see Section 2.3) to the k-means clustering problem (see Definition 1). Formally, assuming as input a set of n points in d dimensions, our goal is to randomly project the points into d̃ dimensions, with d̃ ≪ d, and then apply a k-means clustering algorithm (see Definition 2) on the projected points. Of course, one should be able to compute the projection fast without significantly distorting the "clusters" of the original point set. Our algorithm (see Algorithm 1) satisfies both conditions, by computing the embedding in time linear in the size of the input and by distorting the "clusters" of the dataset by a factor of at most 2 + ε, for some ε ∈ (0, 1/3) (see Theorem 1). We believe that the high dimensionality of modern data will render our algorithm useful and attractive in many practical applications [9]. Dimensionality reduction encompasses the union of two different approaches: feature selection, which embeds the points into a low-dimensional space by selecting actual dimensions of the data, and feature extraction, which finds an embedding by constructing new artificial features that are, for example, linear combinations of the original features. Let A be an n×d matrix containing n d-dimensional points (A_(i) denotes the i-th point of the set), and let k be the number of clusters (see also Section 2.2 for more notation). We slightly abuse notation by also denoting by A the n-point set formed by the rows of A. We say that an embedding f: A → R^{d̃} with f(A_(i)) = Ã_(i) for all i ∈ [n] and some d̃ < d preserves the clustering structure of A within a factor γ, for some γ ≥ 1, if finding an optimal clustering in Ã and plugging it back into A is only a factor of γ worse than finding the optimal clustering directly in A. Clustering optimality and approximability are formally presented in Definitions 1 and 2, respectively. Prior efforts on designing provably accurate dimensionality reduction methods for k-means clustering include: (i) the Singular Value Decomposition (SVD), where one finds an embedding with image
Ã = U_k·Σ_k ∈ R^{n×k} such that the clustering structure is preserved within a factor of two; (ii) random projections, where one projects the input points into t = Ω(log(n)/ε²) dimensions such that with constant probability the clustering structure is preserved within a factor of 1 + ε (see Section 2.3); (iii) SVD-based feature selection, where one can use the SVD to find c = Ω(k·log(k/ε)/ε²) actual features, i.e. an embedding with image Ã ∈ R^{n×c} containing (rescaled) columns from A, such that with constant probability the clustering structure is preserved within a factor of 2 + ε. These results are summarized in Table 1.

    Year   Ref.         Description                 Dimensions           Time                         Accuracy
    1999   [6]          SVD - feature extraction    k                    O(nd·min{n,d})               2
    -      folklore     RP - feature extraction     Ω(log(n)/ε²)         O(nd·⌈ε⁻²log(n)/log(d)⌉)     1+ε
    2009   [5]          SVD - feature selection     Ω(k·log(k/ε)/ε²)     O(nd·min{n,d})               2+ε
    2010   This paper   RP - feature extraction     Ω(k/ε²)              O(nd·⌈ε⁻²k/log(d)⌉)          2+ε

    Table 1: Dimension reduction methods for k-means. In the RP methods the construction is done with random sign matrices and the mailman algorithm (see Sections 2.3 and 3.1, respectively).

A head-to-head comparison of our algorithm with existing results allows us to claim the following improvements: (i) reduce the running time by a factor of min{n, d}·ε²·log(d)/k, while losing only a factor of ε in the approximation accuracy and a factor of 1/ε² in the dimension of the embedding; (ii) reduce the dimension of the embedding and the running time by a factor of log(n)/k while losing a factor of one in the approximation accuracy; (iii) reduce the dimension of the embedding by a factor of log(k/ε) and the running time by a factor of min{n, d}·ε²·log(d)/k, respectively. Finally, we should point out that other techniques, for example the Laplacian scores [10] or the Fisher scores [7], are very popular in applications (see also surveys on the topic [8, 13]). However, they lack a theoretical worst-case analysis of the form we describe in this work.

2 Preliminaries
We start by formally defining the k-means clustering problem using matrix notation. Later in this section, we precisely describe the approximability framework adopted in the k-means clustering literature and fix the notation.

Definition 1. [THE K-MEANS CLUSTERING PROBLEM] Given a set of n points in d dimensions (rows in an n×d matrix A) and a positive integer k denoting the number of clusters, find the n×k indicator matrix X_opt such that

    X_opt = argmin_{X∈𝒳} ‖A − XX^T·A‖_F².   (1)

Here 𝒳 denotes the set of all n×k indicator matrices X. The functional F(A, X) = ‖A − XX^T·A‖_F² is the so-called k-means objective function. An n×k indicator matrix has exactly one non-zero element per row, which denotes cluster membership. Equivalently, for all i = 1, ..., n and j = 1, ..., k, the i-th point belongs to the j-th cluster if and only if X_ij = 1/√z_j, where z_j denotes the number of points in the corresponding cluster. Note that X^T·X = I_k, where I_k is the k×k identity matrix.

2.1 Approximation Algorithms for k-means clustering
Finding X_opt is an NP-hard problem even for k = 2 [3], thus research has focused on developing approximation algorithms for k-means clustering. The following definition captures the framework of such efforts.

Definition 2. [K-MEANS APPROXIMATION ALGORITHM] An algorithm is a "γ-approximation" for the k-means clustering problem (γ ≥ 1) if it takes inputs A and k, and returns an indicator matrix X_γ that satisfies, with probability at least 1 − δ_γ,

    ‖A − X_γX_γ^T·A‖_F² ≤ γ · min_{X∈𝒳} ‖A − XX^T·A‖_F².   (2)

In the above, δ_γ ∈ [0, 1) is the failure probability of the γ-approximation k-means algorithm.
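For concreteness, the indicator matrix and the objective of Definition 1 can be formed from a label vector in a few lines (our own illustration; names are ours):

    import numpy as np

    def indicator_matrix(labels: np.ndarray, k: int) -> np.ndarray:
        """Build the n x k indicator matrix X with X_ij = 1/sqrt(z_j)."""
        n = labels.shape[0]
        X = np.zeros((n, k))
        for j in range(k):
            members = labels == j
            if members.any():
                X[members, j] = 1.0 / np.sqrt(members.sum())
        return X

    def kmeans_objective(A: np.ndarray, labels: np.ndarray, k: int) -> float:
        """F(A, X) = ||A - X X^T A||_F^2; X X^T A maps each row to its centroid."""
        X = indicator_matrix(labels, k)
        return float(np.linalg.norm(A - X @ (X.T @ A), "fro") ** 2)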
For our discussion, we fix the γ-approximation algorithm to be the one presented in [14], which guarantees γ = 1 + ε' ∈ O(1) for any ε' ∈ (0, 1], with running time O(2^{(k/ε')^{O(1)}}·dn).

2.2 Notation
Given an n×d matrix A and an integer k with k < min{n, d}, let U_k ∈ R^{n×k} (resp. V_k ∈ R^{d×k}) be the matrix of the top k left (resp. right) singular vectors of A, and let Σ_k ∈ R^{k×k} be a diagonal matrix containing the top k singular values of A in non-increasing order. If ρ denotes the rank of A, then A_{ρ−k} is equal to A − A_k, with A_k = U_k·Σ_k·V_k^T. By A_(i) we denote the i-th row of A. For an index i taking values in the set {1, ..., n} we write i ∈ [n]. We denote, in non-increasing order, the non-negative singular values of A by σ_i(A), with i ∈ [ρ]. ‖A‖_F and ‖A‖₂ denote the Frobenius and the spectral norm of a matrix A, respectively. A† denotes the pseudo-inverse of A, i.e. the unique d×n matrix satisfying A = AA†A, A†AA† = A†, (AA†)^T = AA†, and (A†A)^T = A†A. Note also that ‖A†‖₂ = σ₁(A†) = 1/σ_ρ(A) and ‖A‖₂ = σ₁(A) = 1/σ_ρ(A†). A useful property of matrix norms is that for any two matrices C and T of appropriate dimensions, ‖CT‖_F ≤ ‖C‖_F·‖T‖₂; this is a stronger version of the standard submultiplicativity property. We call P a projector matrix if it is square and P² = P. We use E[Y] and Var[Y] to denote the expectation and the variance of a random variable Y, and P(e) to denote the probability of an event e. We abbreviate "independent identically distributed" to "i.i.d." and "with probability" to "w.p.". Finally, all logarithms are base two.

2.3 Random Projections
A classical result of Johnson and Lindenstrauss states that any n-point set in d dimensions (rows in a matrix A ∈ R^{n×d}) can be linearly projected into t = Ω(log(n)/ε²) dimensions while preserving pairwise distances within a factor of 1 ± ε, using a random orthonormal matrix [12]. Subsequent research simplified the proof of the above result by showing that such a projection can be generated using a d×t random Gaussian matrix R, i.e., a matrix whose entries are i.i.d. Gaussian random variables with zero mean and variance 1/t [11]. More precisely, the following inequality holds with high probability over the randomness of R:

    (1 − ε)·‖A_(i) − A_(j)‖₂ ≤ ‖A_(i)R − A_(j)R‖₂ ≤ (1 + ε)·‖A_(i) − A_(j)‖₂.   (3)

Notice that such an embedding Ã = AR preserves the metric structure of the point set, so it also preserves, within a factor of 1 + ε, the optimal value of the k-means objective function of A. Achlioptas proved that even a (rescaled) random sign matrix suffices in order to get the same guarantees as above [1], an approach that we adopt here (see step two of Algorithm 1). Moreover, in this paper we heavily exploit the structure of such a random matrix and obtain, as an added bonus, savings in the computation of the projection.

3 A random-projection-type k-means algorithm
Algorithm 1 takes as inputs the matrix A ∈ R^{n×d}, the number of clusters k, an error parameter ε ∈ (0, 1/3), and some γ-approximation k-means algorithm. It returns an indicator matrix X_γ̃ determining a k-partition of the rows of A.
3 A random-projection-type k-means algorithm

Algorithm 1 takes as inputs the matrix $A \in \mathbb{R}^{n\times d}$, the number of clusters $k$, an error parameter $\varepsilon \in (0, 1/3)$, and some $\gamma$-approximation k-means algorithm. It returns an indicator matrix $X_{\tilde\gamma}$ determining a k-partition of the rows of $A$.

Input: $n\times d$ matrix $A$ ($n$ points, $d$ features), number of clusters $k$, error parameter $\varepsilon \in (0, 1/3)$, and $\gamma$-approximation k-means algorithm.
Output: Indicator matrix $X_{\tilde\gamma}$ determining a k-partition on the rows of $A$.
1. Set $t = \Omega(k/\varepsilon^2)$, i.e. set $t = \lceil ck/\varepsilon^2 \rceil$ for a sufficiently large constant $c$.
2. Compute a random $d\times t$ matrix $R$ as follows. For all $i \in [d]$, $j \in [t]$,
$$R_{ij} = \begin{cases} +1/\sqrt{t}, & \text{w.p. } 1/2,\\ -1/\sqrt{t}, & \text{w.p. } 1/2.\end{cases}$$
3. Compute the product $\tilde{A} = AR$.
4. Run the $\gamma$-approximation algorithm on $\tilde{A}$ to obtain $X_{\tilde\gamma}$; return the indicator matrix $X_{\tilde\gamma}$.

Algorithm 1: A random projection algorithm for k-means clustering.

3.1 Running time analysis

Algorithm 1 reduces the dimensions of $A$ by post-multiplying it with a random sign matrix $R$. Interestingly, any "random projection matrix" $R$ that respects the properties of Lemma 2 with $t = \Omega(k/\varepsilon^2)$ can be used in this step. If $R$ is constructed as in Algorithm 1, one can employ the so-called mailman algorithm for matrix multiplication [15] and compute the product $AR$ in $O(nd\,\lceil\varepsilon^{-2}k/\log(d)\rceil)$ time. Indeed, the mailman algorithm computes (after preprocessing) a matrix-vector product of any $d$-dimensional vector (row of $A$) with a $d\times\log(d)$ sign matrix in $O(d)$ time. By partitioning the columns of our $d\times t$ matrix $R$ into $\lceil t/\log(d)\rceil$ blocks, the claim follows. Notice that when $k = O(\log(d))$, we get an (almost) linear time complexity $O(nd/\varepsilon^2)$. The latter assumption is reasonable in our setting, since the need for dimension reduction in k-means clustering usually arises in high-dimensional data (large $d$). Other choices of $R$ would give the same approximation results; the time complexity to compute the embedding would be different though. A matrix where each entry is a random Gaussian variable with zero mean and variance $1/t$ would imply an $O(knd/\varepsilon^2)$ time complexity (naive multiplication). In our experiments in Section 5 we experiment with the matrix $R$ described in Algorithm 1 and employ MatLab's matrix-matrix BLAS implementation to proceed in the third step of the algorithm. We also experimented with a novel MatLab/C implementation of the mailman algorithm but, in the general case, we were not able to outperform MatLab's built-in routines (see Section 5.2).

Finally, note that any $\gamma$-approximation algorithm may be used in the last step of Algorithm 1. Using, for example, the algorithm of [14] with $\gamma = 1+\varepsilon$ would result in an algorithm that preserves the clustering within a factor of $2+\varepsilon$, for any $\varepsilon \in (0, 1/3)$, running in time $O(nd\,\lceil\varepsilon^{-2}k/\log(d)\rceil + 2^{(k/\varepsilon)^{O(1)}}kn/\varepsilon^2)$. In practice though, the Lloyd algorithm [16, 17] is very popular and although it does not admit a worst case theoretical analysis, it empirically does well. We thus employ the Lloyd algorithm for our experimental evaluation of our algorithm in Section 5. Note that, after using the proposed dimensionality reduction method, the cost of the Lloyd heuristic is only $O(nk^2/\varepsilon^2)$ per iteration. This should be compared to the cost of $O(knd)$ per iteration if applied on the original high dimensional data.
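For concreteness, a minimal Python sketch of Algorithm 1 (an illustration, not the authors' MatLab code; scikit-learn's KMeans, i.e. the Lloyd heuristic, stands in for the $\gamma$-approximation step, and naive multiplication stands in for the mailman algorithm):

import numpy as np
from sklearn.cluster import KMeans  # Lloyd's heuristic as the gamma-approximation stand-in

def rp_kmeans(A, k, eps=0.33, c=4, seed=0):
    """Algorithm 1: project onto t = ceil(c*k/eps^2) dimensions, then cluster."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    t = int(np.ceil(c * k / eps**2))
    # Step 2: random sign matrix, rescaled by 1/sqrt(t).
    R = rng.choice([-1.0, 1.0], size=(d, t)) / np.sqrt(t)
    # Step 3: the embedding (naive multiply here; the mailman algorithm would be faster).
    A_tilde = A @ R
    # Step 4: cluster the projected points.
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(A_tilde)

# Example: two well-separated Gaussian blobs in d = 2000 dimensions.
rng = np.random.default_rng(1)
A = np.vstack([rng.standard_normal((50, 2000)),
               rng.standard_normal((50, 2000)) + 5.0])
print(rp_kmeans(A, k=2))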
4 Main Theorem

Theorem 1 is our main quality-of-approximation result for Algorithm 1. Notice that if $\gamma = 1$, i.e. if the k-means problem with inputs $\tilde{A}$ and $k$ is solved exactly, Algorithm 1 guarantees a distortion of at most $2+\varepsilon$, as advertised.

Theorem 1. Let the $n\times d$ matrix $A$ and the positive integer $k < \min\{n,d\}$ be the inputs of the k-means clustering problem. Let $\varepsilon \in (0, 1/3)$ and assume access to a $\gamma$-approximation k-means algorithm. Run Algorithm 1 with inputs $A$, $k$, $\varepsilon$, and the $\gamma$-approximation algorithm in order to construct an indicator matrix $X_{\tilde\gamma}$. Then with probability at least $0.97 - \delta_\gamma$,
$$\|A - X_{\tilde\gamma}X_{\tilde\gamma}^{T}A\|_F^2 \le \big(1 + (1+\varepsilon)\gamma\big)\,\|A - X_{opt}X_{opt}^{T}A\|_F^2. \qquad (4)$$

Proof of Theorem 1. The proof of Theorem 1 employs several results from [19], including Lemma 6, Lemma 8 and Corollary 11. We summarize these results in Lemma 2 below. Before employing Corollary 11, Lemma 6, and Lemma 8 from [19] we need to make sure that the matrix $R$ constructed in Algorithm 1 is consistent with Definition 1 and Lemma 5 in [19]. Theorem 1.1 of [1] immediately shows that the random sign matrix $R$ of Algorithm 1 satisfies Definition 1 and Lemma 5 in [19].

Lemma 2. Assume that the matrix $R$ is constructed by using Algorithm 1 with inputs $A$, $k$ and $\varepsilon$.
1. Singular Values Preservation: For all $i \in [k]$ and w.p. at least 0.99, $|1 - \sigma_i(V_k^{T}R)| \le \varepsilon$.
2. Matrix Multiplication: For any two matrices $S \in \mathbb{R}^{n\times d}$ and $T \in \mathbb{R}^{d\times k}$,
$$E\big[\|ST - SRR^{T}T\|_F^2\big] \le \frac{2}{t}\,\|S\|_F^2\,\|T\|_F^2.$$
3. Moments: For any $C \in \mathbb{R}^{n\times d}$: $E[\|CR\|_F^2] = \|C\|_F^2$ and $Var[\|CR\|_F^2] \le 2\|C\|_F^4/t$.

The first statement above assumes $c$ being sufficiently large (see step 1 of Algorithm 1). We continue with several novel results of general interest.

(Footnote: Reading the input $d\times\log d$ sign matrix requires $O(d\log d)$ time. However, in our case we only consider multiplication with a random sign matrix, therefore we can avoid the preprocessing step by directly computing a random correspondence matrix as discussed in [15, Preprocessing Section].)

Lemma 3. Under the same assumptions as in Lemma 2 and w.p. at least 0.99,
$$\|(V_k^{T}R)^{+} - (V_k^{T}R)^{T}\|_2 \le 3\varepsilon. \qquad (5)$$

Proof. Let $\Phi = V_k^{T}R$; note that $\Phi$ is a $k\times t$ matrix and the SVD of $\Phi$ is $\Phi = U_\Phi \Sigma_\Phi V_\Phi^{T}$, where $U_\Phi$ and $\Sigma_\Phi$ are $k\times k$ matrices, and $V_\Phi$ is a $t\times k$ matrix. By taking the SVD of $(V_k^{T}R)^{+}$ and $(V_k^{T}R)^{T}$ we get
$$\|(V_k^{T}R)^{+} - (V_k^{T}R)^{T}\|_2 = \|V_\Phi \Sigma_\Phi^{-1} U_\Phi^{T} - V_\Phi \Sigma_\Phi U_\Phi^{T}\|_2 = \|V_\Phi(\Sigma_\Phi^{-1} - \Sigma_\Phi)U_\Phi^{T}\|_2 = \|\Sigma_\Phi^{-1} - \Sigma_\Phi\|_2,$$
since $V_\Phi$ and $U_\Phi^{T}$ can be dropped without changing any unitarily invariant norm. Let $\Delta = \Sigma_\Phi^{-1} - \Sigma_\Phi$; $\Delta$ is a $k\times k$ diagonal matrix. Assuming that, for all $i \in [k]$, $\sigma_i(\Phi)$ and $\Delta_i$ denote the $i$-th largest singular value of $\Phi$ and the $i$-th diagonal element of $\Delta$, respectively, it is
$$\Delta_i = \frac{1 - \sigma_i(\Phi)\,\sigma_{k+1-i}(\Phi)}{\sigma_{k+1-i}(\Phi)}.$$
Since $\Delta$ is a diagonal matrix,
$$\|\Delta\|_2 = \max_{1\le i\le k} \Delta_i = \max_{1\le i\le k} \frac{1 - \sigma_i(\Phi)\,\sigma_{k+1-i}(\Phi)}{\sigma_{k+1-i}(\Phi)}.$$
The first statement of Lemma 2, our choice of $\varepsilon \in (0, 1/3)$ and elementary calculations suffice to conclude the proof.

Lemma 4. Under the same assumptions as in Lemma 2 and for any $n\times d$ matrix $C$, w.p. at least 0.99,
$$\|CR\|_F \le \sqrt{1+\varepsilon}\;\|C\|_F. \qquad (6)$$

Proof. Notice that there exists a sufficiently large constant $c$ such that $t \ge ck/\varepsilon^2$. Then, setting $Z = \|CR\|_F^2$, using the third statement of Lemma 2, the fact that $k \ge 1$, and Chebyshev's inequality we get
$$P\big(|Z - E[Z]| \ge \varepsilon\|C\|_F^2\big) \le \frac{Var[Z]}{\varepsilon^2\|C\|_F^4} \le \frac{2\|C\|_F^4}{t\,\varepsilon^2\,\|C\|_F^4} \le \frac{2}{ck} \le 0.01.$$
The last inequality follows assuming $c$ sufficiently large. Finally, taking square root on both sides concludes the proof.
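These concentration statements are easy to probe numerically. A small sketch (illustrative only; the sizes and the constant $c$ are arbitrary) checking the first statement of Lemma 2 on a random orthonormal $V_k$:

import numpy as np

rng = np.random.default_rng(0)
d, k, eps = 2000, 10, 0.2
t = int(np.ceil(10 * k / eps**2))  # arbitrary constant c = 10

# An orthonormal V_k, as would come from the top-k right singular vectors of A.
Vk, _ = np.linalg.qr(rng.standard_normal((d, k)))
R = rng.choice([-1.0, 1.0], size=(d, t)) / np.sqrt(t)

sigma = np.linalg.svd(Vk.T @ R, compute_uv=False)
print("max |1 - sigma_i|:", np.abs(1.0 - sigma).max())  # typically well below eps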
Lemma 5. Under the same assumptions as in Lemma 2 and w.p. at least 0.97,
$$A_k = (AR)(V_k^{T}R)^{+}V_k^{T} + E, \qquad (7)$$
where $E$ is an $n\times d$ matrix with $\|E\|_F \le 4\varepsilon\,\|A - A_k\|_F$.

Proof. Since $(AR)(V_k^{T}R)^{+}V_k^{T}$ is an $n\times d$ matrix, let us write $E = A_k - (AR)(V_k^{T}R)^{+}V_k^{T}$. Then, setting $A = A_k + A_{\rho-k}$, and using the triangle inequality, we get
$$\|E\|_F \le \|A_k - A_kR(V_k^{T}R)^{+}V_k^{T}\|_F + \|A_{\rho-k}R(V_k^{T}R)^{+}V_k^{T}\|_F.$$
The first statement of Lemma 2 implies that $\mathrm{rank}(V_k^{T}R) = k$, thus $(V_k^{T}R)(V_k^{T}R)^{+} = I_k$, where $I_k$ is the $k\times k$ identity matrix. Replacing $A_k = U_k\Sigma_kV_k^{T}$ and setting $(V_k^{T}R)(V_k^{T}R)^{+} = I_k$ we get that
$$\|A_k - A_kR(V_k^{T}R)^{+}V_k^{T}\|_F = \|A_k - U_k\Sigma_kV_k^{T}R(V_k^{T}R)^{+}V_k^{T}\|_F = \|A_k - U_k\Sigma_kV_k^{T}\|_F = 0.$$
To bound the second term above, we drop $V_k^{T}$, add and subtract the matrix $A_{\rho-k}R(V_k^{T}R)^{T}$, and use the triangle inequality and submultiplicativity:
$$\|A_{\rho-k}R(V_k^{T}R)^{+}V_k^{T}\|_F \le \|A_{\rho-k}RR^{T}V_k\|_F + \|A_{\rho-k}R\|_F\,\|(V_k^{T}R)^{+} - (V_k^{T}R)^{T}\|_2.$$
Now we will bound each term individually. A crucial observation for bounding the first term is that $A_{\rho-k}V_k = U_{\rho-k}\Sigma_{\rho-k}V_{\rho-k}^{T}V_k = 0$ by orthogonality of the columns of $V_k$ and $V_{\rho-k}$. This term now can be bounded using the second statement of Lemma 2 with $S = A_{\rho-k}$ and $T = V_k$. This statement, assuming $c$ sufficiently large, and an application of Markov's inequality on the random variable $\|A_{\rho-k}RR^{T}V_k - A_{\rho-k}V_k\|_F$ give that w.p. at least 0.99,
$$\|A_{\rho-k}RR^{T}V_k\|_F \le 0.5\varepsilon\,\|A_{\rho-k}\|_F. \qquad (8)$$
The second two terms can be bounded using Lemma 3 and Lemma 4 on $C = A_{\rho-k}$. Hence, by applying a union bound on Lemma 3, Lemma 4 and Ineq. (8), we get that w.p. at least 0.97,
$$\|E\|_F \le \|A_{\rho-k}RR^{T}V_k\|_F + \|A_{\rho-k}R\|_F\,\|(V_k^{T}R)^{+} - (V_k^{T}R)^{T}\|_2 \le 0.5\varepsilon\,\|A_{\rho-k}\|_F + \sqrt{1+\varepsilon}\,\|A_{\rho-k}\|_F\cdot 3\varepsilon \le 0.5\varepsilon\,\|A_{\rho-k}\|_F + 3.5\varepsilon\,\|A_{\rho-k}\|_F = 4\varepsilon\,\|A_{\rho-k}\|_F.$$
The last inequality holds thanks to our choice of $\varepsilon \in (0, 1/3)$.

Proposition 6. A well-known property connects the SVD of a matrix and k-means clustering. Recall Definition 1, and notice that $X_{opt}X_{opt}^{T}A$ is a matrix of rank at most $k$. From the SVD optimality we immediately get that
$$\|A_{\rho-k}\|_F^2 = \|A - A_k\|_F^2 \le \|A - X_{opt}X_{opt}^{T}A\|_F^2. \qquad (9)$$

4.1 The proof of Eqn. (4) of Theorem 1

We start by manipulating the term $\|A - X_{\tilde\gamma}X_{\tilde\gamma}^{T}A\|_F^2$ in Eqn. (4). Replacing $A$ by $A_k + A_{\rho-k}$, and using the Pythagorean theorem (the subspaces spanned by the components $A_k - X_{\tilde\gamma}X_{\tilde\gamma}^{T}A_k$ and $A_{\rho-k} - X_{\tilde\gamma}X_{\tilde\gamma}^{T}A_{\rho-k}$ are perpendicular), we get
$$\|A - X_{\tilde\gamma}X_{\tilde\gamma}^{T}A\|_F^2 = \underbrace{\|(I - X_{\tilde\gamma}X_{\tilde\gamma}^{T})A_k\|_F^2}_{\theta_1^2} + \underbrace{\|(I - X_{\tilde\gamma}X_{\tilde\gamma}^{T})A_{\rho-k}\|_F^2}_{\theta_2^2}. \qquad (10)$$
We first bound the second term of Eqn. (10). Since $I - X_{\tilde\gamma}X_{\tilde\gamma}^{T}$ is a projector matrix, it can be dropped without increasing a unitarily invariant norm. Now Proposition 6 implies that
$$\theta_2^2 \le \|A_{\rho-k}\|_F^2 \le \|A - X_{opt}X_{opt}^{T}A\|_F^2. \qquad (11)$$
We now bound the first term of Eqn. (10):
$$\theta_1 \le \|(I - X_{\tilde\gamma}X_{\tilde\gamma}^{T})AR(V_k^{T}R)^{+}V_k^{T}\|_F + \|E\|_F \qquad (12)$$
$$\le \|(I - X_{\tilde\gamma}X_{\tilde\gamma}^{T})AR\|_F\,\|(V_k^{T}R)^{+}\|_2 + \|E\|_F \qquad (13)$$
$$\le \sqrt{\gamma}\,\|(I - X_{opt}X_{opt}^{T})AR\|_F\,\|(V_k^{T}R)^{+}\|_2 + \|E\|_F \qquad (14)$$
$$\le \sqrt{\gamma}\,\sqrt{1+\varepsilon}\,\frac{1}{1-\varepsilon}\,\|(I - X_{opt}X_{opt}^{T})A\|_F + 4\varepsilon\,\|(I - X_{opt}X_{opt}^{T})A\|_F \qquad (15)$$
$$\le \sqrt{\gamma}\,(1 + 2.5\varepsilon)\,\|(I - X_{opt}X_{opt}^{T})A\|_F + \sqrt{\gamma}\,4\varepsilon\,\|(I - X_{opt}X_{opt}^{T})A\|_F \qquad (16)$$
$$\le \sqrt{\gamma}\,(1 + 6.5\varepsilon)\,\|(I - X_{opt}X_{opt}^{T})A\|_F. \qquad (17)$$
In Eqn. (12) we used Lemma 5, the triangle inequality, and the fact that $I - X_{\tilde\gamma}X_{\tilde\gamma}^{T}$ is a projector matrix and can be dropped without increasing a unitarily invariant norm. In Eqn. (13) we used submultiplicativity (see Section 2.2) and the fact that $V_k^{T}$ can be dropped without changing the spectral norm. In Eqn. (14) we replaced $X_{\tilde\gamma}$ by $X_{opt}$ and the factor $\sqrt{\gamma}$ appeared in the first term. To better understand this step, notice that $X_{\tilde\gamma}$ gives a $\gamma$-approximation to the optimal k-means clustering of the matrix $AR$, and any other $n\times k$ indicator matrix (for example, the matrix $X_{opt}$) satisfies
$$\|(I - X_{\tilde\gamma}X_{\tilde\gamma}^{T})AR\|_F^2 \le \gamma\,\min_{X\in\mathcal{X}}\|(I - XX^{T})AR\|_F^2 \le \gamma\,\|(I - X_{opt}X_{opt}^{T})AR\|_F^2.$$
In Eqn. (15) we used Lemma 4 with $C = (I - X_{opt}X_{opt}^{T})A$, Lemma 3 and Proposition 6. In Eqn. (16) we used the fact that $\gamma \ge 1$ and that for any $\varepsilon \in (0, 1/3)$ it is $\sqrt{1+\varepsilon}/(1-\varepsilon) \le 1 + 2.5\varepsilon$.
Taking squares in Eqn. (17) we get
$$\theta_1^2 \le \gamma\,(1 + 28\varepsilon)\,\|(I - X_{opt}X_{opt}^{T})A\|_F^2.$$
Finally, rescaling $\varepsilon$ accordingly and applying the union bound on Lemma 5 and Definition 2 concludes the proof.

5 Experiments

This section describes an empirical evaluation of Algorithm 1 on a face images collection. We implemented our algorithm in MatLab and compared it against other prominent dimensionality reduction techniques such as the Local Linear Embedding (LLE) algorithm and the Laplacian scores for feature selection. We ran all the experiments on a Mac machine with a dual core 2.26 GHz processor and 4 GB of RAM. Our empirical findings are very promising, indicating that our algorithm and implementation could be very useful in real applications involving clustering of large-scale data.

5.1 An application of Algorithm 1 on a face images collection

We experiment with a face images collection. We downloaded the images corresponding to the ORL database from [21]. This collection contains 400 face images of dimensions 64 x 64 corresponding to 40 different people. These images form 40 groups, each one containing exactly 10 different images of the same person. After vectorizing each 2-D image and putting it as a row vector in an appropriate matrix, one can construct a 400 x 4096 image-by-pixel matrix $A$. In this matrix, objects are the face images of the ORL collection while features are the pixel values of the images. To apply the Lloyd's heuristic on $A$, we employ MatLab's function kmeans with the parameter determining the maximum number of repetitions set to 30. We also chose a deterministic initialization of the Lloyd's iterative E-M procedure, i.e. whenever we call kmeans with inputs a matrix $\tilde{A} \in \mathbb{R}^{400\times\tilde{d}}$, with $\tilde{d} \ge 1$, and the integer $k = 40$, we initialize the cluster centers with the 1-st, 11-th, ..., 391-th rows of $\tilde{A}$, respectively. Note that this initialization corresponds to picking images from the forty different groups of the available collection, since the images of every group are stored sequentially in $A$.

We evaluate the clustering outcome from two different perspectives. First, we measure and report the objective function $F$ of the k-means clustering problem. In particular, we report a normalized version of $F$, i.e. $\hat{F} = F/\|A\|_F^2$. Second, we report the mis-classification accuracy of the clustering result. We denote this number by $P$ ($0 \le P \le 1$), where $P = 0.9$, for example, implies that 90% of the objects were assigned to the correct cluster after the application of the clustering algorithm.
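A minimal sketch of these two evaluation measures (illustrative Python rather than the authors' MatLab; the majority-vote label matching used for $P$ is our assumption, since the text does not spell out how cluster labels are aligned with the 40 ground-truth groups):

import numpy as np

def normalized_objective(A, labels, k):
    """F_hat = F / ||A||_F^2, with F the k-means objective of Definition 1."""
    F = 0.0
    for j in range(k):
        block = A[labels == j]
        if len(block):
            F += ((block - block.mean(axis=0))**2).sum()
    return F / (A**2).sum()

def accuracy(labels, truth, k):
    """P: fraction correctly assigned; each cluster is mapped to its majority group.

    `truth` holds integer ground-truth group labels (assumption: majority mapping)."""
    correct = 0
    for j in range(k):
        members = truth[labels == j]
        if len(members):
            correct += np.bincount(members).max()
    return correct / len(truth)

Note that the majority-vote mapping can assign two clusters to the same group; a one-to-one (Hungarian) matching would be a stricter alternative.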
In the sequel, we first perform experiments by running Algorithm 1 with everything fixed but $t$, which denotes the dimensionality of the projected data. Then, for four representative values of $t$, we compare Algorithm 1 with three other dimensionality reduction methods as well as with the approach of running the Lloyd's heuristic on the original high dimensional data.

We run Algorithm 1 with $t = 5, 10, ..., 300$ and $k = 40$ on the matrix $A$ described above. Figure 1 depicts the results of our experiments.

[Figure 1: three panels plotting, against the number of dimensions $t$, the normalized objective function value $\hat{F}$ ("F vs. t"), the mis-classification rate $P$ ("P vs. t"), and the time of the k-means procedure in seconds $T$ ("T vs. t"). Caption: The results of our experiments after running Algorithm 1 with $k = 40$ on the face images collection.]

A few interesting observations are immediate. First, the normalized objective function $\hat{F}$ is a piece-wise non-increasing function of the number of dimensions $t$. The decrease in $\hat{F}$ is large in the first few choices of $t$; then, increasing the number of dimensions $t$ of the projected data decreases $\hat{F}$ by a smaller value. The increase of $t$ seems to become irrelevant after around $t = 90$ dimensions. Second, the mis-classification rate $P$ is a piece-wise non-decreasing function of $t$. The increase of $t$ seems to become irrelevant again after around $t = 90$ dimensions. Another interesting observation of these two plots is that the mis-classification rate is not directly relevant to the objective function $F$. Notice, for example, that the two have different behavior from $t = 20$ to $t = 25$ dimensions. Finally, we report the running time $T$ of the algorithm, which includes only the clustering step. Notice that the increase in the running time is almost linear with the increase of $t$. The non-linearities in the plot are due to the fact that the numbers of iterations necessary to guarantee convergence of the Lloyd's method are different for different values of $t$. This observation indicates that small values of $t$ result in significant computational savings, especially when $n$ is large. Compare, for example, the one-second running time that is needed to solve the k-means problem when $t = 275$ against the 10 seconds that are necessary to solve the problem on the high dimensional data. To our benefit, in this case, the multiplication $AR$ takes only 0.1 seconds, resulting in a total running time of 1.1 seconds, which corresponds to an almost 90% speedup of the overall procedure.

We now compare our algorithm against other dimensionality reduction techniques. In particular, in this paragraph we present head-to-head comparisons for the following five methods: (i) SVD: the Singular Value Decomposition (or Principal Components Analysis) dimensionality reduction approach (we use MatLab's svds function); (ii) LLE: the famous Local Linear Embedding algorithm of [18] (we use the MatLab code from [23] with the parameter K, determining the number of neighbors, set equal to 40); (iii) LS: the Laplacian score feature selection method of [10] (we use the MatLab code from [22] with the default parameters; in particular, we run W = constructW(A); Scores = LaplacianScore(A, W);); (iv) HD: we run the k-means algorithm on the High Dimensional data; and (v) RP: the random projection method we proposed in this work (we use our own MatLab implementation). The results of our experiments on $A$, $k = 40$ and $t = 10, 20, 50, 100$ are shown in Table 2.

Method | t = 10:  P, F  | t = 20:  P, F  | t = 50:  P, F  | t = 100: P, F
SVD    | 0.5900, 0.0262 | 0.6750, 0.0268 | 0.7650, 0.0269 | 0.6500, 0.0324
LLE    | 0.6500, 0.0245 | 0.7125, 0.0247 | 0.7725, 0.0258 | 0.6150, 0.0337
LS     | 0.3400, 0.0380 | 0.3875, 0.0362 | 0.4575, 0.0319 | 0.4850, 0.0278
HD     | 0.6255, 0.0220 | 0.6255, 0.0220 | 0.6255, 0.0220 | 0.6255, 0.0220
RP     | 0.4225, 0.0283 | 0.4800, 0.0255 | 0.6425, 0.0234 | 0.6575, 0.0219

Table 2: Numerics from our experiments with five different methods.

In terms of computational complexity, for example at $t = 50$, the times (in seconds) needed by all five methods (only the dimension reduction step) are $T_{SVD} = 5.9$, $T_{LLE} = 4.4$, $T_{LS} = 0.32$, $T_{HD} = 0$, and $T_{RP} = 0.03$. Notice that our algorithm is much faster than the other approaches while achieving worse ($t = 10, 20$), slightly worse ($t = 50$) or slightly better ($t = 100$) approximation accuracy results.

5.2 A note on the mailman algorithm for matrix-matrix and matrix-vector multiplication

In this section, we compare three different implementations of the third step of Algorithm 1. As we already discussed in Section 3.1, the mailman algorithm is asymptotically faster than naively multiplying the two matrices $A$ and $R$.
We want to understand whether this asymptotic behavior of the mailman algorithm is indeed achieved in a practical implementation. We compare three different approaches for the implementation of the third step of our algorithm: the first is MatLab's function times(A, R) (MM1); the second exploits the fact that we do not need to explicitly store the whole matrix $R$, and that the computation can be performed on the fly, column-by-column (MM2); the last is the mailman algorithm [15] (see Section 3.1 for more details). We implemented the last two algorithms in C using MatLab's MEX technology. We observed that when $A$ is a vector ($n = 1$), the mailman algorithm is indeed faster than (MM1) and (MM2), as is also observed in the numerical experiments of [15]. Moreover, it is worth noting that (MM2) is also superior to (MM1). On the other hand, our best implementation of the mailman algorithm for matrix-matrix operations is inferior to both (MM1) and (MM2) for any $10 \le n \le 10{,}000$. Based on these findings, we chose to use (MM1) for our experimental evaluations.

Acknowledgments: Christos Boutsidis was supported by NSF CCF 0916415 and a Gerondelis Foundation Fellowship; Petros Drineas was partially supported by an NSF CAREER Award and NSF CCF 0916415.

References
[1] D. Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Science, 66(4):671-687, 2003.
[2] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In ACM Symposium on Theory of Computing (STOC), pages 557-563, 2006.
[3] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75(2):245-248, 2009.
[4] E. Bingham and H. Mannila. Random projection in dimensionality reduction: applications to image and text data. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 245-250, 2001.
[5] C. Boutsidis, M. W. Mahoney, and P. Drineas. Unsupervised feature selection for the k-means clustering problem. In Advances in Neural Information Processing Systems (NIPS), 2009.
[6] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay. Clustering in large graphs and matrices. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 291-299, 1999.
[7] D. Foley and J. Sammon. An optimal set of discriminant vectors. IEEE Transactions on Computers, C-24(3):281-289, March 1975.
[8] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182, 2003.
[9] I. Guyon, S. Gunn, A. Ben-Hur, and G. Dror. Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural Information Processing Systems (NIPS), pages 545-552, 2005.
[10] X. He, D. Cai, and P. Niyogi. Laplacian score for feature selection. In Advances in Neural Information Processing Systems (NIPS) 18, pages 507-514, 2006.
[11] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In ACM Symposium on Theory of Computing (STOC), pages 604-613, 1998.
[12] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189-206, 1984.
[13] E. Kokiopoulou, J. Chen, and Y. Saad. Trace optimization and eigenproblems in dimension reduction methods. Numerical Linear Algebra with Applications, to appear.
[14] A. Kumar, Y. Sabharwal, and S. Sen. A simple linear time (1+epsilon)-approximation algorithm for k-means clustering in any dimensions. In IEEE Symposium on Foundations of Computer Science (FOCS), pages 454-462, 2004.
[15] E. Liberty and S. Zucker. The Mailman algorithm: A note on matrix-vector multiplication. Information Processing Letters, 109(3):179-182, 2009.
[16] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[17] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In IEEE Symposium on Foundations of Computer Science (FOCS), pages 165-176, 2006.
[18] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
[19] T. Sarlos. Improved approximation algorithms for large matrices via random projections. In IEEE Symposium on Foundations of Computer Science (FOCS), pages 329-337, 2006.
[20] X. Wu et al. Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1-37, 2008.
[21] http://www.cs.uiuc.edu/~dengcai2/Data/FaceData.html
[22] http://www.cs.uiuc.edu/~dengcai2/Data/data.html
[23] http://www.cs.nyu.edu/~roweis/lle/
Online Learning for Latent Dirichlet Allocation

David M. Blei, Department of Computer Science, Princeton University, Princeton, NJ ([email protected])
Matthew D. Hoffman, Department of Computer Science, Princeton University, Princeton, NJ ([email protected])
Francis Bach, INRIA - École Normale Supérieure, Paris, France ([email protected])

Abstract

We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA). Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good or better than those found with batch VB, and in a fraction of the time.

1 Introduction

Hierarchical Bayesian modeling has become a mainstay in machine learning and applied statistics. Bayesian models provide a natural way to encode assumptions about observed data, and analysis proceeds by examining the posterior distribution of model parameters and latent variables conditioned on a set of observations. For example, research in probabilistic topic modeling (the application we will focus on in this paper) revolves around fitting complex hierarchical Bayesian models to large collections of documents. In a topic model, the posterior distribution reveals latent semantic structure that can be used for many applications.

For topic models and many other Bayesian models of interest, however, the posterior is intractable to compute and researchers must appeal to approximate posterior inference. Modern approximate posterior inference algorithms fall in two categories: sampling approaches and optimization approaches. Sampling approaches are usually based on Markov Chain Monte Carlo (MCMC) sampling, where a Markov chain is defined whose stationary distribution is the posterior of interest. Optimization approaches are usually based on variational inference, which is called variational Bayes (VB) when used in a Bayesian hierarchical model. Whereas MCMC methods seek to generate independent samples from the posterior, VB optimizes a simplified parametric distribution to be close in Kullback-Leibler divergence to the posterior. Although the choice of approximate posterior introduces bias, VB is empirically shown to be faster than and as accurate as MCMC, which makes it an attractive option when applying Bayesian models to large datasets [1, 2, 3].

Nonetheless, large scale data analysis with VB can be computationally difficult. Standard "batch" VB algorithms iterate between analyzing each observation and updating dataset-wide variational parameters. The per-iteration cost of batch algorithms can quickly become impractical for very large datasets.
In topic modeling applications, this issue is particularly relevant: topic modeling promises to summarize the latent structure of massive document collections that cannot be annotated by hand.

[Figure 1. Top: Perplexity on held-out Wikipedia documents as a function of number of documents analyzed, i.e., the number of E steps. Online VB run on 3.3 million unique Wikipedia articles is compared with online VB run on 98,000 Wikipedia articles and with the batch algorithm run on the same 98,000 articles. The online algorithms converge much faster than the batch algorithm does. Bottom: Evolution of a topic about business as online LDA sees more and more documents; the top eight words are shown after 2048, 4096, 8192, 12288, 16384, 32768, 49152 and 65536 documents, drifting from a mix of terms such as "systems", "road", "health", "communication" and "language" toward a stable list dominated by "business", "service", "companies", "company" and "industry".]

A central research problem for topic modeling is to efficiently fit models to larger corpora [4, 5]. To this end, we develop an online variational Bayes algorithm for latent Dirichlet allocation (LDA), one of the simplest topic models and one on which many others are based. Our algorithm is based on online stochastic optimization, which has been shown to produce good parameter estimates dramatically faster than batch algorithms on large datasets [6]. Online LDA handily analyzes massive collections of documents and, moreover, online LDA need not locally store or collect the documents: each can arrive in a stream and be discarded after one look. In the subsequent sections, we derive online LDA and show that it converges to a stationary point of the variational objective function. We study the performance of online LDA in several ways, including by fitting a topic model to 3.3M articles from Wikipedia without looking at the same article twice. We show that online LDA finds topic models as good as or better than those found with batch VB, and in a fraction of the time (see Figure 1). Online variational Bayes is a practical new method for estimating the posterior of complex hierarchical Bayesian models.

2 Online variational Bayes for latent Dirichlet allocation

Latent Dirichlet Allocation (LDA) [7] is a Bayesian probabilistic model of text documents. It assumes a collection of $K$ "topics." Each topic defines a multinomial distribution over the vocabulary and is assumed to have been drawn from a Dirichlet, $\beta_k \sim \mathrm{Dirichlet}(\eta)$. Given the topics, LDA assumes the following generative process for each document $d$. First, draw a distribution over topics $\theta_d \sim \mathrm{Dirichlet}(\alpha)$. Then, for each word $i$ in the document, draw a topic index $z_{di} \in \{1,\dots,K\}$ from the topic weights $z_{di} \sim \theta_d$ and draw the observed word $w_{di}$ from the selected topic, $w_{di} \sim \beta_{z_{di}}$. For simplicity, we assume symmetric priors on $\theta$ and $\beta$, but this assumption is easy to relax [8].
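As a concrete illustration of this generative process, a minimal sampler (illustrative only; the vocabulary size, hyperparameters, and document length are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
K, W, alpha, eta = 5, 1000, 0.1, 0.01        # arbitrary demo values

beta = rng.dirichlet(np.full(W, eta), size=K)  # K topics over a W-word vocabulary

def generate_document(n_words=50):
    theta = rng.dirichlet(np.full(K, alpha))   # per-document topic weights
    z = rng.choice(K, size=n_words, p=theta)   # per-word topic indices
    return np.array([rng.choice(W, p=beta[k]) for k in z])

doc = generate_document()
print(doc[:10])   # the first ten sampled word ids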
Note that if we sum over the topic assignments $z$, then we get $p(w_{di}\,|\,\theta_d,\beta) = \sum_k \theta_{dk}\beta_{kw}$. This leads to the "multinomial PCA" interpretation of LDA; we can think of LDA as a probabilistic factorization of the matrix of word counts $n$ (where $n_{dw}$ is the number of times word $w$ appears in document $d$) into a matrix of topic weights $\theta$ and a dictionary of topics $\beta$ [9]. Our work can thus be seen as an extension of online matrix factorization techniques that optimize squared error [10] to more general probabilistic formulations.

We can analyze a corpus of documents with LDA by examining the posterior distribution of the topics $\beta$, topic proportions $\theta$, and topic assignments $z$ conditioned on the documents. This reveals latent structure in the collection that can be used for prediction or data exploration. This posterior cannot be computed directly [7], and is usually approximated using Markov Chain Monte Carlo (MCMC) methods or variational inference. Both classes of methods are effective, but both present significant computational challenges in the face of massive data sets. Developing scalable approximate inference methods for topic models is an active area of research [3, 4, 5, 11]. To this end, we develop online variational inference for LDA, an approximate posterior inference algorithm that can analyze massive collections of documents. We first review the traditional variational Bayes algorithm for LDA and its objective function, then present our online method, and show that it converges to a stationary point of the same objective function.

2.1 Batch variational Bayes for LDA

In Variational Bayesian inference (VB) the true posterior is approximated by a simpler distribution $q(z,\theta,\beta)$, which is indexed by a set of free parameters [12, 13]. These parameters are optimized to maximize the Evidence Lower BOund (ELBO):
$$\log p(w\,|\,\alpha,\eta) \ge \mathcal{L}(w,\phi,\gamma,\lambda) \triangleq E_q[\log p(w,z,\theta,\beta\,|\,\alpha,\eta)] - E_q[\log q(z,\theta,\beta)]. \qquad (1)$$
Maximizing the ELBO is equivalent to minimizing the KL divergence between $q(z,\theta,\beta)$ and the posterior $p(z,\theta,\beta\,|\,w,\alpha,\eta)$. Following [7], we choose a fully factorized distribution $q$ of the form
$$q(z_{di}=k) = \phi_{d w_{di} k}; \quad q(\theta_d) = \mathrm{Dirichlet}(\theta_d;\gamma_d); \quad q(\beta_k) = \mathrm{Dirichlet}(\beta_k;\lambda_k). \qquad (2)$$
The posterior over the per-word topic assignments $z$ is parameterized by $\phi$, the posterior over the per-document topic weights $\theta$ is parameterized by $\gamma$, and the posterior over the topics $\beta$ is parameterized by $\lambda$. As a shorthand, we refer to $\lambda$ as "the topics." Equation 1 factorizes to
$$\mathcal{L}(w,\phi,\gamma,\lambda) = \sum_d \Big( E_q[\log p(w_d\,|\,\theta_d,z_d,\beta)] + E_q[\log p(z_d\,|\,\theta_d)] - E_q[\log q(z_d)] + E_q[\log p(\theta_d\,|\,\alpha)] - E_q[\log q(\theta_d)] + \big(E_q[\log p(\beta\,|\,\eta)] - E_q[\log q(\beta)]\big)/D \Big). \qquad (3)$$
Notice we have brought the per-corpus terms into the summation over documents, and divided them by the number of documents $D$. This step will help us to derive an online inference algorithm. We now expand the expectations above to be functions of the variational parameters. This reveals that the variational objective relies only on $n_{dw}$, the number of times word $w$ appears in document $d$. When using VB (as opposed to MCMC) documents can be summarized by their word counts:
$$\begin{aligned}
\mathcal{L} = \sum_d \Big\{ &\sum_w n_{dw}\sum_k \phi_{dwk}\big(E_q[\log\theta_{dk}] + E_q[\log\beta_{kw}] - \log\phi_{dwk}\big)\\
&- \log\Gamma\big(\textstyle\sum_k \gamma_{dk}\big) + \sum_k\big((\alpha-\gamma_{dk})\,E_q[\log\theta_{dk}] + \log\Gamma(\gamma_{dk})\big)\\
&+ \Big(\sum_k\big(-\log\Gamma\big(\textstyle\sum_w \lambda_{kw}\big) + \sum_w\big((\eta-\lambda_{kw})\,E_q[\log\beta_{kw}] + \log\Gamma(\lambda_{kw})\big)\big)\Big)/D\\
&+ \log\Gamma(K\alpha) - K\log\Gamma(\alpha) + \big(\log\Gamma(W\eta) - W\log\Gamma(\eta)\big)/D \Big\} \;\triangleq\; \sum_d \ell(n_d,\phi_d,\gamma_d,\lambda), \qquad (4)
\end{aligned}$$
where $W$ is the size of the vocabulary and $D$ is the number of documents. $\ell(n_d,\phi_d,\gamma_d,\lambda)$ denotes the contribution of document $d$ to the ELBO. $\mathcal{L}$ can be optimized using coordinate ascent over the variational parameters $\phi$, $\gamma$, $\lambda$ [7]:
$$\phi_{dwk} \propto \exp\{E_q[\log\theta_{dk}] + E_q[\log\beta_{kw}]\}; \quad \gamma_{dk} = \alpha + \sum_w n_{dw}\phi_{dwk}; \quad \lambda_{kw} = \eta + \sum_d n_{dw}\phi_{dwk}. \qquad (5)$$
The expectations under $q$ of $\log\theta$ and $\log\beta$ are
$$E_q[\log\theta_{dk}] = \Psi(\gamma_{dk}) - \Psi\big(\textstyle\sum_{i=1}^{K}\gamma_{di}\big); \quad E_q[\log\beta_{kw}] = \Psi(\lambda_{kw}) - \Psi\big(\textstyle\sum_{i=1}^{W}\lambda_{ki}\big), \qquad (6)$$
where $\Psi$ denotes the digamma function (the first derivative of the logarithm of the gamma function).

The updates in equation 5 are guaranteed to converge to a stationary point of the ELBO. By analogy to the Expectation-Maximization (EM) algorithm [14], we can partition these updates into an "E" step (iteratively updating $\phi$ and $\gamma$ until convergence, holding $\lambda$ fixed) and an "M" step (updating $\lambda$ given $\phi$). In practice, this algorithm converges to a better solution if we reinitialize $\gamma$ and $\phi$ before each E step. Algorithm 1 outlines batch VB for LDA.

Algorithm 1: Batch variational Bayes for LDA
  Initialize $\lambda$ randomly.
  while relative improvement in $\mathcal{L}(w,\phi,\gamma,\lambda) > 0.00001$ do
    E step:
    for $d = 1$ to $D$ do
      Initialize $\gamma_{dk} = 1$. (The constant 1 is arbitrary.)
      repeat
        Set $\phi_{dwk} \propto \exp\{E_q[\log\theta_{dk}] + E_q[\log\beta_{kw}]\}$
        Set $\gamma_{dk} = \alpha + \sum_w \phi_{dwk} n_{dw}$
      until $\frac{1}{K}\sum_k |\text{change in }\gamma_{dk}| < 0.00001$
    end for
    M step:
    Set $\lambda_{kw} = \eta + \sum_d n_{dw}\phi_{dwk}$
  end while
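A minimal sketch of the E step updates (5)-(6) for a single document (illustrative NumPy, not the authors' released implementation; `n_d` is the document's length-$W$ vector of word counts):

import numpy as np
from scipy.special import digamma

def e_step_doc(n_d, lam, alpha, max_iters=100, tol=1e-5):
    """Fit phi and gamma for one document, holding the topics lambda fixed."""
    K, W = lam.shape
    Elogbeta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))  # Eq. (6)
    gamma = np.ones(K)
    for _ in range(max_iters):
        Elogtheta = digamma(gamma) - digamma(gamma.sum())
        # phi[w, k] from Eq. (5), normalized over k for each word (log-space for stability)
        log_phi = Elogtheta[None, :] + Elogbeta.T                      # W x K
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        new_gamma = alpha + phi.T @ n_d                                # Eq. (5)
        if np.abs(new_gamma - gamma).mean() < tol:
            gamma = new_gamma
            break
        gamma = new_gamma
    return phi, gamma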
2.2 Online variational inference for LDA

Algorithm 1 has constant memory requirements and empirically converges faster than batch collapsed Gibbs sampling [3]. However, it still requires a full pass through the entire corpus each iteration. It can therefore be slow to apply to very large datasets, and is not naturally suited to settings where new data is constantly arriving. We propose an online variational inference algorithm for fitting $\lambda$, the parameters to the variational posterior over the topic distributions $\beta$. Our algorithm is nearly as simple as the batch VB algorithm, but converges much faster for large datasets.

A good setting of the topics $\lambda$ is one for which the ELBO $\mathcal{L}$ is as high as possible after fitting the per-document variational parameters $\gamma$ and $\phi$ with the E step defined in algorithm 1. Let $\gamma(n_d,\lambda)$ and $\phi(n_d,\lambda)$ be the values of $\gamma_d$ and $\phi_d$ produced by the E step. Our goal is to set $\lambda$ to maximize
$$\mathcal{L}(n,\lambda) \triangleq \sum_d \ell\big(n_d, \gamma(n_d,\lambda), \phi(n_d,\lambda), \lambda\big), \qquad (7)$$
where $\ell(n_d,\gamma_d,\phi_d,\lambda)$ is the $d$th document's contribution to the variational bound in equation 4. This is analogous to the goal of least-squares matrix factorization, although the ELBO for LDA is less convenient to work with than a simple squared loss function such as the one in [10].

Online VB for LDA ("online LDA") is described in algorithm 2. As the $t$th vector of word counts $n_t$ is observed, we perform an E step to find locally optimal values of $\gamma_t$ and $\phi_t$, holding $\lambda$ fixed. We then compute $\tilde\lambda$, the setting of $\lambda$ that would be optimal (given $\phi_t$) if our entire corpus consisted of the single document $n_t$ repeated $D$ times. $D$ is the number of unique documents available to the algorithm, e.g. the size of a corpus. (In the true online case $D \to \infty$, corresponding to empirical Bayes estimation of $\beta$.) We then update $\lambda$ using a weighted average of its previous value and $\tilde\lambda$. The weight given to $\tilde\lambda$ is given by $\rho_t \triangleq (\tau_0 + t)^{-\kappa}$, where $\kappa \in (0.5, 1]$ controls the rate at which old values of $\tilde\lambda$ are forgotten and $\tau_0 \ge 0$ slows down the early iterations of the algorithm. The condition that $\kappa \in (0.5, 1]$ is needed to guarantee convergence. We show in section 2.3 that online LDA corresponds to a stochastic natural gradient algorithm on the variational objective $\mathcal{L}$ [15, 16]. This algorithm closely resembles one proposed in [16] for online VB on models with hidden data; the most important difference is that we use an approximate E step to optimize $\gamma_t$ and $\phi_t$, since we cannot compute the conditional distribution $p(z_t,\theta_t\,|\,\beta,n_t,\alpha)$ exactly.

Mini-batches. A common technique in stochastic learning is to consider multiple observations per update to reduce noise [6, 17]. In online LDA, this means computing $\tilde\lambda$ using $S > 1$ observations:
$$\tilde\lambda_{kw} = \eta + \frac{D}{S}\sum_s n_{tsw}\,\phi_{tswk}, \qquad (8)$$
where $n_{ts}$ is the $s$th document in mini-batch $t$. The variational parameters $\phi_{ts}$ and $\gamma_{ts}$ for this document are fit with a normal E step. Note that we recover batch VB when $S = D$ and $\kappa = 0$.

Hyperparameter estimation. In batch variational LDA, point estimates of the hyperparameters $\alpha$ and $\eta$ can be fit given $\gamma$ and $\lambda$ using a linear-time Newton-Raphson method [7]. We can likewise incorporate updates for $\alpha$ and $\eta$ into online LDA:
$$\alpha \leftarrow \alpha - \rho_t\,\hat\alpha(\gamma_t); \qquad \eta \leftarrow \eta - \rho_t\,\hat\eta(\lambda), \qquad (9)$$
where $\hat\alpha(\gamma_t)$ is the inverse of the Hessian times the gradient $\nabla_\alpha\,\ell(n_t,\gamma_t,\phi_t,\lambda)$, $\hat\eta(\lambda)$ is the inverse of the Hessian times the gradient $\nabla_\eta\,\mathcal{L}$, and $\rho_t \triangleq (\tau_0 + t)^{-\kappa}$ as elsewhere.

Algorithm 2: Online variational Bayes for LDA
  Define $\rho_t \triangleq (\tau_0 + t)^{-\kappa}$
  Initialize $\lambda$ randomly.
  for $t = 0$ to $\infty$ do
    E step:
    Initialize $\gamma_{tk} = 1$. (The constant 1 is arbitrary.)
    repeat
      Set $\phi_{twk} \propto \exp\{E_q[\log\theta_{tk}] + E_q[\log\beta_{kw}]\}$
      Set $\gamma_{tk} = \alpha + \sum_w \phi_{twk} n_{tw}$
    until $\frac{1}{K}\sum_k |\text{change in }\gamma_{tk}| < 0.00001$
    M step:
    Compute $\tilde\lambda_{kw} = \eta + D\,n_{tw}\phi_{twk}$
    Set $\lambda = (1-\rho_t)\lambda + \rho_t\tilde\lambda$.
  end for
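A compact sketch of the update loop of Algorithm 2, reusing the per-document E step `e_step_doc` sketched earlier (again illustrative rather than the released implementation; mini-batching via equation 8 is a one-line change):

import numpy as np

def online_lda(docs, D, K, W, alpha=0.1, eta=0.01, tau0=1024, kappa=0.7, seed=0):
    """docs: iterable of length-W word-count vectors; D: (estimated) corpus size."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(100.0, 0.01, size=(K, W))      # random initialization of the topics
    for t, n_t in enumerate(docs):
        rho_t = (tau0 + t) ** (-kappa)
        phi, gamma = e_step_doc(n_t, lam, alpha)        # E step (sketched above)
        lam_tilde = eta + D * (phi * n_t[:, None]).T    # lambda if n_t were repeated D times
        lam = (1.0 - rho_t) * lam + rho_t * lam_tilde   # weighted-average M step
    return lam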
2.3 Analysis of convergence

In this section we show that algorithm 2 converges to a stationary point of the objective defined in equation 7. Since variational inference replaces sampling with optimization, we can use results from stochastic optimization to analyze online LDA. Stochastic optimization algorithms optimize an objective using noisy estimates of its gradient [18]. Although there is no explicit gradient computation, algorithm 2 can be interpreted as a stochastic natural gradient algorithm [16, 15]. We begin by deriving a related first-order stochastic gradient algorithm for LDA. Let $g(n)$ denote the population distribution over documents $n$ from which we will repeatedly sample documents:
$$g(n) \triangleq \frac{1}{D}\sum_{d=1}^{D} \mathbb{I}[n = n_d]. \qquad (10)$$
$\mathbb{I}[n = n_d]$ is 1 if $n = n_d$ and 0 otherwise. If this population consists of the $D$ documents in the corpus, then we can rewrite equation 7 as
$$\mathcal{L}(g,\lambda) \triangleq D\,E_g\big[\ell\big(n, \gamma(n,\lambda), \phi(n,\lambda), \lambda\big)\,\big|\,\lambda\big], \qquad (11)$$
where $\ell$ is defined as in equation 3. We can optimize equation 11 over $\lambda$ by repeatedly drawing an observation $n_t \sim g$, computing $\gamma_t \triangleq \gamma(n_t,\lambda)$ and $\phi_t \triangleq \phi(n_t,\lambda)$, and applying the update
$$\lambda \leftarrow \lambda + \rho_t\,D\,\nabla_\lambda\,\ell(n_t,\gamma_t,\phi_t,\lambda), \qquad (12)$$
where $\rho_t \triangleq (\tau_0+t)^{-\kappa}$ as in algorithm 2. If we condition on the current value of $\lambda$ and treat $\gamma_t$ and $\phi_t$ as random variables drawn at the same time as each observed document $n_t$, then $E_g[D\,\nabla_\lambda\ell(n_t,\gamma_t,\phi_t,\lambda)\,|\,\lambda] = \nabla_\lambda\sum_d \ell(n_d,\gamma_d,\phi_d,\lambda)$. Thus, since $\sum_{t=0}^{\infty}\rho_t = \infty$ and $\sum_{t=0}^{\infty}\rho_t^2 < \infty$, the analysis in [19] shows both that $\lambda$ converges and that the gradient $\nabla_\lambda\sum_d \ell(n_d,\gamma_d,\phi_d,\lambda)$ converges to 0, and thus that $\lambda$ converges to a stationary point. (Footnote: Although we use a deterministic procedure to compute $\gamma$ and $\phi$ given $n$ and $\lambda$, this analysis can also be applied if $\gamma$ and $\phi$ are optimized using a randomized algorithm. We address this case in the supplement.)

The update in equation 12 only makes use of first-order gradient information. Stochastic gradient algorithms can be sped up by multiplying the gradient by the inverse of an appropriate positive definite matrix $H$ [19]. One choice for $H$ is the Hessian of the objective function. In variational inference, an alternative is to use the Fisher information matrix of the variational distribution $q$ (i.e., the Hessian of the log of the variational probability density function), which corresponds to using a natural gradient method instead of a (quasi-)Newton method [16, 15]. Following the analysis in [16], the gradient of the per-document ELBO $\ell$ can be written as
$$\frac{\partial\ell(n_t,\gamma_t,\phi_t,\lambda)}{\partial\lambda_{kw}} = \sum_{v=1}^{W} \frac{\partial E_q[\log\beta_{kv}]}{\partial\lambda_{kw}}\big(-\lambda_{kv}/D + \eta/D + n_{tv}\phi_{tvk}\big) = \sum_{v=1}^{W} -\frac{\partial^2\log q(\log\beta_k)}{\partial\lambda_{kw}\,\partial\lambda_{kv}}\big(-\lambda_{kv}/D + \eta/D + n_{tv}\phi_{tvk}\big), \qquad (13)$$
where we have used the fact that $E_q[\log\beta_{kv}]$ is the derivative of the log-normalizer of $q(\log\beta_k)$. By definition, multiplying equation 13 by the inverse of the Fisher information matrix yields
$$\bigg(-\frac{\partial^2\log q(\log\beta_k)}{\partial\lambda_k\,\partial\lambda_k^{T}}\bigg)^{-1}\frac{\partial\ell(n_t,\gamma_t,\phi_t,\lambda)}{\partial\lambda_k}\bigg|_{w} = -\lambda_{kw}/D + \eta/D + n_{tw}\phi_{twk}. \qquad (14)$$
Multiplying equation 14 by $\rho_t D$ and adding it to $\lambda_{kw}$ yields the update for $\lambda$ in algorithm 2. Thus we can interpret our algorithm as a stochastic natural gradient algorithm, as in [16].

3 Related work

Comparison with other stochastic learning algorithms. In the standard stochastic gradient optimization setup, the number of parameters to be fit does not depend on the number of observations [19]. However, some learning algorithms must also fit a set of per-observation parameters (such as the per-document variational parameters $\phi_d$ and $\gamma_d$ in LDA). The problem is addressed by online coordinate ascent algorithms such as those described in [20, 21, 16, 17, 10]. The goal of these algorithms is to set the global parameters so that the objective is as good as possible once the per-observation parameters are optimized. Most of these approaches assume the computability of a unique optimum for the per-observation parameters, which is not available for LDA.

Efficient sampling methods. Markov Chain Monte Carlo (MCMC) methods form one class of approximate inference algorithms for LDA. Collapsed Gibbs Sampling (CGS) is a popular MCMC approach that samples from the posterior over topic assignments $z$ by repeatedly sampling the topic assignment $z_{di}$ conditioned on the data and all other topic assignments [22]. One online MCMC approach adapts CGS by sampling topic assignments $z_{di}$ based on the topic assignments and data for all previously analyzed words, instead of all other words in the corpus [23]. This algorithm is fast and has constant memory requirements, but is not guaranteed to converge to the posterior. Two alternative online MCMC approaches were considered in [24]. The first, called incremental LDA, periodically resamples the topic assignments for previously analyzed words. The second approach uses particle filtering instead of CGS. In a study in [24], none of these three online MCMC algorithms performed as well as batch CGS. Instead of online methods, the authors of [4] used parallel computing to apply LDA to large corpora. They developed two approximate parallel CGS schemes for LDA that gave similar predictive performance on held-out documents to batch CGS. However, they require parallel hardware, and their complexity and memory costs still scale linearly with the number of documents. Except for the algorithm in [23] (which is not guaranteed to converge), all of the MCMC algorithms described above have memory costs that scale linearly with the number of documents analyzed.
By contrast, batch VB can be implemented using constant memory, and parallelizes easily. As we will show in the next section, its online counterpart is even faster.

4 Experiments

We ran several experiments to evaluate online LDA's efficiency and effectiveness. The first set of experiments compares algorithms 1 and 2 on static datasets. The second set of experiments evaluates online VB in the setting where new documents are constantly being observed. Both algorithms were implemented in Python using Numpy. The implementations are as similar as possible. (Footnote: Open-source Python implementations of batch and online LDA can be found at http://www.cs.princeton.edu/~mdhoffma.)

Table 1: Best settings of $\kappa$ and $\tau_0$ for various mini-batch sizes $S$, with resulting perplexities on the Nature and Wikipedia corpora.

Best parameter settings for the Nature corpus:
S          |    1 |    4 |   16 |   64 |  256 | 1024 | 4096 | 16384
κ          |  0.9 |  0.8 |  0.8 |  0.7 |  0.6 |  0.5 |  0.5 |   0.5
τ0         | 1024 | 1024 | 1024 | 1024 | 1024 |  256 |   64 |     1
Perplexity | 1132 | 1087 | 1052 | 1053 | 1042 | 1031 | 1030 |  1046

Best parameter settings for the Wikipedia corpus:
S          |    1 |    4 |   16 |   64 |  256 | 1024 | 4096 | 16384
κ          |  0.9 |  0.9 |  0.8 |  0.7 |  0.6 |  0.5 |  0.5 |   0.5
τ0         | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |   64 |     1
Perplexity |  675 |  640 |  611 |  595 |  588 |  584 |  580 |   584

[Figure 2: Held-out perplexity on the Nature (left) and Wikipedia (right) corpora as a function of CPU time in seconds (log scale), for mini-batch sizes 1, 16, 256, 1024, 4096 and 16384, and for batch VB fit to 10K and 98K documents. Caption: For moderately large mini-batch sizes, online LDA finds solutions as good as those that the batch LDA finds, but with much less computation. When fit to a 10,000-document subset of the training corpus, batch LDA's speed improves, but its performance suffers.]

We use perplexity on held-out data as a measure of model fit. Perplexity is defined as the geometric mean of the inverse marginal probability of each word in the held-out set of documents:
$$\mathrm{perplexity}(n^{test},\lambda,\alpha) \triangleq \exp\Big\{-\big(\textstyle\sum_i \log p(n_i^{test}\,|\,\alpha,\beta)\big)\big/\big(\textstyle\sum_{i,w} n_{iw}^{test}\big)\Big\}, \qquad (15)$$
where $n_i^{test}$ denotes the vector of word counts for the $i$th document. Since we cannot directly compute $\log p(n_i^{test}\,|\,\alpha,\beta)$, we bound it below with the ELBO and use the resulting quantity as a proxy:
$$\mathrm{perplexity}(n^{test},\lambda,\alpha) \le \exp\Big\{-\big(\textstyle\sum_i E_q[\log p(n_i^{test},\theta_i,z_i\,|\,\alpha,\beta)] - E_q[\log q(\theta_i,z_i)]\big)\big/\big(\textstyle\sum_{i,w} n_{iw}^{test}\big)\Big\}. \qquad (16)$$
The per-document parameters $\gamma_i$ and $\phi_i$ for the variational distributions $q(\theta_i)$ and $q(z_i)$ are fit using the E step in algorithm 2. The topics $\lambda$ are fit to a training set of documents and then held fixed. In all experiments $\alpha$ and $\eta$ are fixed at 0.01 and the number of topics $K = 100$. There is some question as to the meaningfulness of perplexity as a metric for comparing different topic models [25]. Held-out likelihood metrics are nonetheless well suited to measuring how well an inference algorithm accomplishes the specific optimization task defined by a model.

Evaluating learning parameters. Online LDA introduces several learning parameters: $\kappa \in (0.5, 1]$, which controls how quickly old information is forgotten; $\tau_0 \ge 0$, which downweights early iterations; and the mini-batch size $S$, which controls how many documents are used each iteration. Although online LDA converges to a stationary point for any valid $\kappa$, $\tau_0$, and $S$, the quality of this stationary point and the speed of convergence may depend on how the learning parameters are set.
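A small sketch of how these parameters shape the step sizes (illustrative only): $\kappa$ sets the decay rate, $\tau_0$ suppresses the first updates, and any $\kappa \in (0.5, 1]$ satisfies the Robbins-Monro conditions $\sum_t \rho_t = \infty$, $\sum_t \rho_t^2 < \infty$ used in Section 2.3.

import numpy as np

def rho(t, tau0, kappa):
    """Step size rho_t = (tau0 + t)^(-kappa) from Algorithm 2."""
    return (tau0 + t) ** (-kappa)

t = np.arange(5)
for tau0, kappa in [(1, 0.5), (1024, 0.5), (64, 0.9)]:
    print(f"tau0={tau0:5d} kappa={kappa}:", np.round(rho(t, tau0, kappa), 4))
# Larger tau0 shrinks the earliest steps; larger kappa makes the steps decay faster.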
We evaluated a range of settings of the learning parameters $\kappa$, $\tau_0$, and $S$ on two corpora: 352,549 documents from the journal Nature and 100,000 documents downloaded from the English version of Wikipedia. (Footnote: For the Nature articles, we removed all words not in a pruned vocabulary of 4,253 words.) For each corpus, we set aside a 1,000-document test set and a separate 1,000-document validation set. We then ran online LDA for five hours on the remaining documents from each corpus for $\kappa \in \{0.5, 0.6, 0.7, 0.8, 0.9, 1.0\}$, $\tau_0 \in \{1, 4, 16, 64, 256, 1024\}$, and $S \in \{1, 4, 16, 64, 256, 1024, 4096, 16384\}$, for a total of 288 runs per corpus. After five hours of CPU time, we computed perplexity on the test sets for the topics $\lambda$ obtained at the end of each fit.

Table 1 summarizes the best settings for each corpus of $\kappa$ and $\tau_0$ for a range of settings of $S$. The supplement includes a more exhaustive summary. The best learning parameter settings for both corpora were $\kappa = 0.5$, $\tau_0 = 64$, and $S = 4096$. The best settings of $\kappa$ and $\tau_0$ are consistent across the two corpora. For mini-batch sizes from 256 to 16384 there is little difference in perplexity scores. Several trends emerge from these results. Higher values of the learning rate $\kappa$ and the downweighting parameter $\tau_0$ lead to better performance for small mini-batch sizes $S$, but worse performance for larger values of $S$. Mini-batch sizes of at least 256 documents outperform smaller mini-batch sizes.

Comparing batch and online on fixed corpora. To compare batch LDA to online LDA, we evaluated held-out perplexity as a function of time on the Nature and Wikipedia corpora above. We tried various mini-batch sizes from 1 to 16,384, using the best learning parameters for each mini-batch size found in the previous study of the Nature corpus. We also evaluated batch LDA fit to a 10,000-document subset of the training corpus. We computed perplexity on a separate validation set from the test set used in the previous experiment. Each algorithm ran for 24 hours of CPU time. Figure 2 summarizes the results. On the larger Nature corpus, online LDA finds a solution as good as the batch algorithm's with much less computation. On the smaller Wikipedia corpus, the online algorithm finds a better solution than the batch algorithm does. The batch algorithm converges quickly on the 10,000-document corpora, but makes less accurate predictions on held-out documents.

True online. To demonstrate the ability of online VB to perform in a true online setting, we wrote a Python script to continually download and analyze mini-batches of articles chosen at random from a list of approximately 3.3 million Wikipedia articles. This script can download and analyze about 60,000 articles an hour. It completed a pass through all 3.3 million articles in under three days. The amount of time needed to download an article and convert it to a vector of word counts is comparable to the amount of time that the online LDA algorithm takes to analyze it. We ran online LDA with $\kappa = 0.5$, $\tau_0 = 1024$, and $S = 1024$. Figure 1 shows the evolution of the perplexity obtained on the held-out validation set of 1,000 Wikipedia articles by the online algorithm as a function of number of articles seen. Shown for comparison is the perplexity obtained by the online algorithm (with the same parameters) fit to only 98,000 Wikipedia articles, and that obtained by the batch algorithm fit to the same 98,000 articles.
The online algorithm outperforms the batch algorithm regardless of which training dataset is used, but it does best with access to a constant stream of novel documents. The batch algorithm's failure to outperform the online algorithm on limited data may be due to stochastic gradient's robustness to local optima [19]. The online algorithm converged after analyzing about half of the 3.3 million articles. Even one iteration of the batch algorithm over that many articles would have taken days.

5 Discussion

We have developed online variational Bayes (VB) for LDA. This algorithm requires only a few more lines of code than the traditional batch VB of [7], and is handily applied to massive and streaming document collections. Online VB for LDA approximates the posterior as well as previous approaches in a fraction of the time. The approach we used to derive an online version of batch VB for LDA is general (and simple) enough to apply to a wide variety of hierarchical Bayesian models.

Acknowledgments. D.M. Blei is supported by ONR 175-6343, NSF CAREER 0745520, AFOSR 09NL202, the Alfred P. Sloan foundation, and a grant from Google. F. Bach is supported by ANR (MGA project).

(Footnote: For the Wikipedia articles, we removed all words not from a fixed vocabulary of 7,995 common words. This vocabulary was obtained by removing words less than 3 characters long from a list of the 10,000 most common words in Project Gutenberg texts obtained from http://en.wiktionary.org/wiki/Wiktionary:Frequency lists.)

References
[1] M. Braun and J. McAuliffe. Variational inference for large-scale models of discrete choice. arXiv, (0712.2526), 2008.
[2] D. Blei and M. Jordan. Variational methods for the Dirichlet process. In Proc. 21st Int'l Conf. on Machine Learning, 2004.
[3] A. Asuncion, M. Welling, P. Smyth, and Y.W. Teh. On smoothing and inference for topic models. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[4] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed inference for latent Dirichlet allocation. In Neural Information Processing Systems, 2007.
[5] Feng Yan, Ningyi Xu, and Yuan Qi. Parallel inference for latent Dirichlet allocation on graphics processing units. In Advances in Neural Information Processing Systems 22, pages 2134-2142, 2009.
[6] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, volume 20, pages 161-168. NIPS Foundation (http://books.nips.cc), 2008.
[7] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, January 2003.
[8] Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems 22, pages 1973-1981, 2009.
[9] W. Buntine. Variational extensions to EM and multinomial PCA. In European Conf. on Machine Learning, 2002.
[10] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11(1):19-60, 2010.
[11] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming document collections. In KDD 2009: Proc. 15th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pages 937-946, 2009.
[12] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
In Advances in Neural Information Processing Systems 12, 2000.
[14] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38, 1977.
[15] L. Bottou and N. Murata. Stochastic approximations and efficient learning. The Handbook of Brain Theory and Neural Networks, Second edition. The MIT Press, Cambridge, MA, 2002.
[16] M.A. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[17] P. Liang and D. Klein. Online EM for unsupervised models. In Proc. Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 611–619, 2009.
[18] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[19] L. Bottou. Online learning and stochastic approximations. Cambridge University Press, Cambridge, UK, 1998.
[20] R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models, 89:355–368, 1998.
[21] M.A. Sato and S. Ishii. On-line EM algorithm for the normalized Gaussian network. Neural Computation, 12(2):407–432, 2000.
[22] T. Griffiths and M. Steyvers. Finding scientific topics. Proc. National Academy of Science, 2004.
[23] X. Song, C.Y. Lin, B.L. Tseng, and M.T. Sun. Modeling and predicting personal information dissemination behavior. In KDD 2005: Proc. 11th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining. ACM, 2005.
[24] K.R. Canini, L. Shi, and T.L. Griffiths. Online inference of topics with latent Dirichlet allocation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, 2009.
[25] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems 21 (NIPS), 2009.
Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories

George Konidaris†  Scott Kuindersma†‡  Andrew Barto†  Roderic Grupen‡
Autonomous Learning Laboratory†  Laboratory for Perceptual Robotics‡
Computer Science Department, University of Massachusetts Amherst
{gdk, scottk, barto, grupen}@cs.umass.edu

Abstract

We introduce CST, an algorithm for constructing skill trees from demonstration trajectories in continuous reinforcement learning domains. CST uses a changepoint detection method to segment each trajectory into a skill chain by detecting a change of appropriate abstraction, or that a segment is too complex to model as a single skill. The skill chains from each trajectory are then merged to form a skill tree. We demonstrate that CST constructs an appropriate skill tree that can be further refined through learning in a challenging continuous domain, and that it can be used to segment demonstration trajectories on a mobile manipulator into chains of skills where each skill is assigned an appropriate abstraction.

1 Introduction

Hierarchical reinforcement learning [1] offers an appealing family of approaches to scaling up standard reinforcement learning (RL) [2] methods by enabling the use of both low-level primitive actions and higher-level macro-actions (or skills). A core research goal in hierarchical RL is the development of methods by which an agent can autonomously acquire its own high-level skills. Recently, Konidaris and Barto [3] introduced a general method for skill discovery in continuous RL domains called skill chaining. Skill chaining adaptively segments complex policies into skills that can be executed sequentially and that are easier to represent and learn. It can be coupled with abstraction selection [4] to select skill-specific abstractions, which can aid in acquiring policies that are high-dimensional when represented monolithically, but can be broken into subpolicies that can be defined over far fewer variables. Unfortunately, performing skill chaining iteratively is slow: it creates skills sequentially, and requires several episodes to learn a new skill policy followed by several further episodes to learn by trial and error where it can be executed successfully. While this is reasonable for many problems, in domains where experience is expensive (such as robotics) we require a faster method. Moreover, with the growing realization that learning policies completely from scratch in such domains is infeasible, we may also need to bootstrap learning through a method that provides a reasonable initial policy such as learning from demonstration [5], sequencing existing controllers [6], using a kinematic planner, or using a feedback controller [7]. We introduce CST, a new skill acquisition method that can build skill trees (with appropriate abstractions) from a set of sample solution trajectories obtained from demonstration, a planner, or a controller. CST uses an incremental MAP changepoint detection method [8] to segment each solution trajectory into skills and then merges the resulting skill chains into a skill tree. The time complexity of CST is controlled through the use of a particle filter. We show that CST can construct a skill tree from human demonstration trajectories in Pinball, a challenging dynamic continuous domain, and that the resulting skills can be refined using RL.
We further show that it can be used to segment demonstration trajectories from a mobile manipulator into chains of skills, where each skill is assigned an appropriate abstraction.

2 Background

2.1 Hierarchical Reinforcement Learning and the Options Framework

The options framework [9] adds methods for hierarchical planning and learning using temporally-extended actions to the standard RL framework. Rather than restricting the agent to selecting actions that take a single time step to complete, it models higher-level decision making using options: actions that have their own policies and which may require multiple time steps to complete. An option, o, consists of three components: an option policy, π_o, giving the probability of executing each action in each state in which the option is defined; an initiation set indicator function, I_o, which is 1 for states where the option can be executed and 0 elsewhere; and a termination condition, β_o, giving the probability of option execution terminating in states where the option is defined. Options can be added to an agent's action repertoire alongside its primitive actions, and the agent chooses when to execute them in the same way it chooses when to execute primitive actions. Methods for creating new options must determine when to create an option, how to define its termination condition (skill discovery), how to define its initiation set, and how to learn its policy. Given an option reward function, policy learning can be viewed as just another RL problem. Creation and termination are typically performed by the identification of option goal states, with an option created to reach a goal state and then terminate. The initiation set is then the set of states from which a goal state can be reached. Although there are many skill discovery methods for discrete domains, very few exist for continuous domains. To the best of our knowledge (see Section 6), skill chaining [3] is the only such method that does not make any assumptions about the domain structure.

2.2 Skill Chaining and Abstraction Selection

Skill chaining mirrors an idea present in other control fields: for example, in robotics a similar idea is known as pre-image backchaining [10, 11], and in control for chaotic systems as adaptive targeting [12]. Given a continuous RL problem where the policy is either too difficult to learn directly or too complex to represent monolithically, we construct a skill tree such that we can obtain a trajectory from every start state to a solution state by executing a sequence (or chain) of acquired skills. This is accomplished as follows. The agent starts with an initial list of target events (regions of the state space), T, which in most cases consists simply of the solution regions of the problem. It then performs RL as usual to try to learn a reasonable policy for the problem. When the agent triggers some target event, T_o (which occurs when it moves from a state not contained in any event in T to one contained in T_o), it creates a new option, o, with the goal of reaching T_o. As the agent continues to interact with the environment it learns a policy for o, and adds it to its set of available actions. Initially, o has an initiation set that covers the whole state space. Over time, some executions of o will succeed (the agent reaches T_o), and some will fail. The agent uses these states as training examples and learns I_o, the initiation set of o, using a classifier. When learning has converged, I_o is added to T as a new target event.
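For reference, the three option components introduced in Section 2.1 fit naturally in a small container; this is an illustrative sketch, not the authors' code, with state and action types left abstract:

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Option:
        """One temporally-extended action, as defined above."""
        policy: Callable[[Any], Any]          # pi_o: state -> action
        initiation: Callable[[Any], bool]     # I_o: may the option start here?
        termination: Callable[[Any], float]   # beta_o: P(execution terminates in state)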
An agent applying this method along a single trajectory will slowly learn a chain of skills that grows backward from the task goal region towards the start region (as depicted in Figure 1). More generally, multiple trajectories, noise in control, stochasticity in the environment, or simple variance will result in skill trees rather than skill chains because more than one option will be created to reach some target events. Eventually, the entire state space is covered by acquired skills. A more detailed description can be found in Konidaris and Barto [3].

Figure 1: An agent creates options using skill chaining. (a) First, the agent encounters a target event and creates an option to reach it. (b) Entering the initiation set of this first option triggers the creation of a second option whose target is the initiation set of the first option. (c) Finally, after many trajectories the agent has created a chain of options to reach the original target. (d) When multiple options are created to target an initiation set, the chain splits and the agent creates a skill tree.

The major advantage of skill chaining is that it provides a mechanism for the agent to adaptively represent a complex policy using a collection of simpler policies. We can take this further and allow each individual option policy to use its own state abstraction. In this way, we may be able to represent high-dimensional policies using component policies that are low-dimensional (and therefore feasible to learn). For example, a complex policy like driving to school in the morning, that requires far too many features to be easily represented monolithically, may be broken into component tasks (such as walking to the car, opening the door, inserting the key, etc.) that do not. Abstraction selection [4] is a simple mechanism for achieving this. Given a library of possible abstractions, and a set of sample trajectories (as, for example, obtained when initially learning an option policy), abstraction selection finds the abstraction best able to represent the value function inferred from the sample trajectories. It can be combined with skill chaining to learn a skill tree where each skill has its own abstraction; in such cases, the initiation set of each skill will be restricted to states where its policy can be well-represented using its abstraction.

2.3 Changepoint Detection

Skill chaining learns a segmented policy by creating a new option when either the most suitable abstraction changes, or the value function (and therefore policy) becomes too complex to represent with a single option. We would like to segment an entire trajectory at once; the question then becomes: how many options exist along it, and where do they begin and end? This can be modeled as a multiple changepoint detection problem [8]. In this setting, we are given observed data and a set of candidate models. The data are segmented such that the data within a segment are generated by a single model. We are to infer the number of changepoints and their positions, and select and fit an appropriate model for each segment. Figure 2 shows a simple example.

Figure 2: Data with multiple segments. The observed data (left) are generated by three different models (solid line, changepoints shown using dashed lines, right) plus noise. The first and third segments are generated by linear models, whereas the second is quadratic.
Unlike the standard regression setting, in RL our data is sequentially but not necessarily spatially segmented, and we would like to perform changepoint detection online: processing transitions as they occur and then discarding them. Fearnhead and Liu [8] introduced online algorithms for both Bayesian and MAP changepoint detection; we use the simpler method that obtains the MAP changepoints and models via an online Viterbi algorithm.

The changepoint process is implemented as follows. We observe data tuples (x_t, y_t), for times t ∈ [1, T], and are given a set of models Q with prior p(q) for q ∈ Q. We model the marginal probability of a segment length l with PMF g(l) and CDF G(l) = Σ_{i=1}^{l} g(i). Finally, we assume that we can fit a segment from time j + 1 to t using model q to obtain the probability of the data P(j, t, q) conditioned on q. This results in a Hidden Markov Model where the hidden state at time t is the model q_t and the observed data is y_t given x_t. The hidden state transition probability from time i to time j with model q is given by g(j − i − 1)p(q) (reflecting the probability of a segment of length j − i − 1 and the prior for q). The probability of an observed data segment starting at time i + 1 and continuing through j using q is P(i, j, q)(1 − G(j − i − 1)), reflecting the fit probability and the probability of a segment of at least j − i − 1 steps. Note that a transition between two instances of the same model (but with different parameters) is possible. We can thus use an online Viterbi algorithm to compute P_t(j, q), the probability of the changepoint previous to time t occurring at time j using model q:

P_t(j, q) = (1 − G(t − j − 1)) P(j, t, q) p(q) P_j^MAP,    (1)

P_j^MAP = max_{i,q} [P_j(i, q) g(j − i)] / [1 − G(j − i − 1)],  ∀ j < t.    (2)

At time j, the i and q maximizing Equation 2 are the MAP changepoint position and model for the current segment, respectively. We then perform this procedure for time i, repeating until we reach time 1, to obtain the changepoints and models for the entire sequence. Thus, at each time step t we compute P_t(j, q) for each model q and changepoint time j < t (using P_j^MAP) and then compute P_t^MAP and store it.¹ This requires O(T) storage and O(TL|Q|) time per timestep, where L is the time required to compute P(j, t, q). We can reduce L to a constant for most models of interest by storing a small sufficient statistic and updating it incrementally in time independent of t, obtaining P(j, t, q) from P(j, t − 1, q). In addition, since most P_t(j, q) values will be close to zero, we can employ a particle filter to discard most combinations of j and q and retain a constant number per timestep. Each particle then stores j, q, P_j^MAP, sufficient statistics and its Viterbi path. We use the Stratified Optimal Resampling algorithm of Fearnhead and Liu [8] to filter down to M particles whenever the number of particles reaches N. This results in a time complexity of O(NL) and storage complexity of O(Nc), where there are O(c) changepoints in the data.

¹In practice all equations are computed in log form to ensure numerical stability.
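A minimal sketch of this recursion is below, computed in log form for numerical stability as noted above, and without the particle-filter pruning (so it runs in O(T²|Q|) time). The fit, length-prior, and model-prior callbacks are assumed supplied by the caller:

    def map_changepoints(T, models, log_fit, log_g, log_one_minus_G, log_prior):
        """Online MAP changepoint detection via Equations (1)-(2).

        log_fit(j, t, q)   -> log P(j, t, q), fit of model q to data j+1..t
        log_g(l)           -> log g(l), the segment-length PMF
        log_one_minus_G(l) -> log(1 - G(l)), segment-length survival function
        log_prior(q)       -> log p(q), the model prior
        """
        logP = {}                # logP[(t, j, q)] = log P_t(j, q)
        logP_map = {0: 0.0}      # log P_0^MAP = 0 by convention
        back = {}                # Viterbi back-pointers
        for t in range(1, T + 1):
            # Equation (1): changepoint previous to t at time j, with model q
            for j in range(t):
                for q in models:
                    logP[(t, j, q)] = (log_one_minus_G(t - j - 1) + log_fit(j, t, q)
                                       + log_prior(q) + logP_map[j])
            # Equation (2): MAP over the candidate (j, q) pairs
            score, back[t] = max(
                ((logP[(t, j, q)] + log_g(t - j) - log_one_minus_G(t - j - 1), (j, q))
                 for j in range(t) for q in models),
                key=lambda s: s[0])
            logP_map[t] = score
        # Walk the back-pointers from T down to 1 to recover the segmentation.
        segments, t = [], T
        while t > 0:
            j, q = back[t]
            segments.append((j, q))   # data j+1..t is explained by model q
            t = j
        return list(reversed(segments))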
3 Constructing Skill Trees from Sample Trajectories

We propose using changepoint detection to segment sample trajectories into skills, using return R_t (sum of discounted reward) as the target variable. This provides an intuitive mapping to RL since a value function is simply an estimator of return; segmentation based on return thus provides a natural way to segment the value function implied by a trajectory into simpler value functions, or to detect a change in model (and therefore abstraction).

To do so, we must select an appropriate model of expected skill (segment) length, and an appropriate model for fitting the data. We assume a geometric distribution for skill lengths with parameter p, so that g(l) = (1 − p)^(l−1) p and G(l) = 1 − (1 − p)^l. This gives us a natural way to set p since p = 1/k, where k is the expected skill length. Since RL in continuous state spaces usually employs linear function approximation, it is natural to use a linear regression model with Gaussian noise as our model of the data. Following Fearnhead and Liu [8], we assume conjugate priors: the Gaussian noise prior has mean zero, and variance with inverse gamma prior with parameters v/2 and u/2. The prior for each weight is a zero-mean Gaussian with variance σ²δ. Integrating the likelihood function over the parameters obtains:

P(j, t, q) = (π^(−n/2) / δ^(m/2)) |(A + D)^(−1)|^(1/2) · (u^(v/2) / (y + u)^((n+v)/2)) · (Γ((n+v)/2) / Γ(v/2)),    (3)

where n = t − j, q has m basis functions, Γ is the Gamma function, D is an m × m matrix with δ^(−1) on the diagonal and zeros elsewhere, and:

A = Σ_{i=j}^{t} Φ(x_i)Φ(x_i)ᵀ,    y = (Σ_{i=j}^{t} R_i²) − bᵀ(A + D)^(−1) b,    (4)

where Φ(x_i) is a vector of m basis functions evaluated at state x_i, R_i = Σ_{j=i}^{T} γ^(j−i) r_j is the return obtained from state i, and b = Σ_{i=j}^{t} R_i Φ(x_i).

Note that we are using each R_t as the target regression variable in this formulation, even though we only observe r_t for each state. However, to compute Equation 3 we need only retain sufficient statistics A, b and (Σ_{i=j}^{t} R_i²). Each can be updated incrementally using r_t (the latter two using traces). Thus, the sufficient statistics required to obtain the fit probability can be computed incrementally and online at each timestep, without requiring any transition data to be stored. Furthermore, A and b are the same matrices used for performing a least-squares fit to the data using R_t as the regression target. They can thus be used to produce a value function fit (equivalent to a least-squares Monte Carlo estimate) for the skill segment if so desired; again, without the need to store the trajectory.

Using this model we can segment a single trajectory into a skill chain; given multiple skill chains from different trajectories, we would like to merge them into a skill tree. We merge two trajectory segments by assigning them to the same skill (rather than two distinct skills). Since we wish to build skills that can be sequentially executed, we can only consider merging two segments when they have the same target, which means that the segments immediately following each of them have been merged. Since we assume that all trajectories have the same final goal, we merge two chains by starting at their final skill segments. For each pair of segments, we determine whether or not they are a good statistical match, merging them if so, and repeating this process until we fail to merge a pair of skill segments, after which the remaining skill chains branch off on their own. Since P(j, t, q) as defined in Equation 3 is the integration over the likelihood function of our model given segment data, we can reuse it as a measure of whether a pair of trajectories are better modeled as one skill (where we simply sum their sufficient statistics), or as two separate skills (forming new sufficient statistics using two groups of basis functions, each of which is zero over the other's data segments).
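Since Equations 3 and 4 reduce to a handful of running sums, a minimal sketch of the segment statistics and the resulting log evidence is below. This is a sketch under the stated priors; for simplicity it consumes the returns R_i directly rather than accumulating them from per-step rewards with traces:

    import numpy as np
    from scipy.special import gammaln

    class SegmentStats:
        """Sufficient statistics A, b, z = sum R_i^2 for one candidate
        segment, and the log of the marginal likelihood in Equation 3
        (weight prior variance sigma^2 * delta, so D = I / delta; noise
        variance has an inverse-gamma prior with parameters v/2, u/2)."""
        def __init__(self, m, delta=1e-4, v=1.0, u=1.0):
            self.A = np.zeros((m, m))
            self.b = np.zeros(m)
            self.z = 0.0
            self.n = 0
            self.m, self.delta, self.v, self.u = m, delta, v, u

        def update(self, phi, R):
            """Fold in one observation: features phi(x_i) and return R_i."""
            self.A += np.outer(phi, phi)
            self.b += R * phi
            self.z += R * R
            self.n += 1

        def log_marginal(self):
            """log P(j, t, q) from Equation 3, computed in log form."""
            AD = self.A + np.eye(self.m) / self.delta
            y = self.z - self.b @ np.linalg.solve(AD, self.b)
            n, v, u, m = self.n, self.v, self.u, self.m
            _, logdet_AD = np.linalg.slogdet(AD)
            return (-(n / 2) * np.log(np.pi) - (m / 2) * np.log(self.delta)
                    - 0.5 * logdet_AD
                    + (v / 2) * np.log(u) - ((n + v) / 2) * np.log(y + u)
                    + gammaln((n + v) / 2) - gammaln(v / 2))

Because A, b, and the squared-return sum are plain sums, the merge test described above can simply add the statistics of two candidate segments before calling log_marginal.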
Before merging, we perform a fast test to ensure that the trajectory pairs actually overlap in state space; if they do not, we will often be able to represent them both simultaneously with low error and hence our metric may incorrectly suggest a merge. Segmenting a sample trajectory should be performed using a lower-order function approximator than that used for policy learning, since we see merely a single trajectory sample rather than a dense sample over the state space. However, merging should be performed using the same function approximator used for learning. This necessitates the maintenance of two sets of sufficient statistics during segmentation; fortunately, the majority of time is consumed computing P(j, t, q), which during segmentation is only required using the lower-order approximator. If we are to merge skills obtained over multiple trajectories into trees, we require the component skills to be aligned, meaning that the changepoints occur in roughly the same places. This will occur naturally in domains where changepoints are primarily caused by a change in relevant abstraction. When this is not the case, they may vary since segmentation is then based on function approximation boundaries, and hence two trajectories segmented independently may be poorly aligned. Therefore, when segmenting two trajectories sequentially in anticipation of a merge, we may wish to include a bias on changepoint locations in the second trajectory. We model this bias as a Mixture of Gaussians, centering an isotropic Gaussian at each location in state-space where a changepoint previously occurred. We can include this bias during changepoint detection by multiplying Equation 1 with the resulting PDF evaluated at the current state.

4 Acquiring Skills from Human Demonstration in the PinBall Domain

The Pinball domain is a continuous domain with dynamic aspects, sharp discontinuities, and extended control characteristics that make it difficult for control and function approximation.² Previous experiments have shown that skill chaining is able to find a very good policy while flat learning finds a poor solution [3]. In this section, we evaluate the performance benefits obtained using a skill tree generated from a pair of human-provided solution trajectories. The goal of PinBall is to maneuver the small ball (which starts in one of two places) into the large red hole. The ball is dynamic (drag coefficient 0.995), so its state is described by four variables: x, y, ẋ and ẏ. Collisions with obstacles are fully elastic and cause the ball to bounce, so rather than merely avoiding obstacles the agent may choose to use them to efficiently reach the hole. There are five primitive actions: incrementing or decrementing ẋ or ẏ by a small amount (which incurs a reward of −5 per action), or leaving them unchanged (which incurs a reward of −1 per action); reaching the goal obtains a reward of 10,000. We use the Pinball domain instance shown in Figure 3 with 5 pairs (one trajectory in each pair for each start state) of human demonstration trajectories.

4.1 Implementation Details

Overall task learning for both standard and option-learning agents used linear function-approximation Sarsa (γ = 1, ε = 0.01) using a 5th-order Fourier basis [13] with α = 0.0005. Option policy learning used Q-learning (α_o = 0.0005, γ = 1, ε = 0.01) with a 3rd-order Fourier basis. Initiation sets were learned using logistic regression using 2nd-order polynomial features with learning rate α = 0.1 and 100 sweeps per new data point.
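The Fourier basis of [13] admits a very small sketch; this illustrative version assumes states are rescaled to [0, 1]^d, which is how such bases are typically applied:

    import numpy as np
    from itertools import product

    def fourier_basis(order, dims):
        """Return phi(x) for the full Fourier basis of the given order:
        one feature cos(pi * c . x) per coefficient vector c in
        {0, ..., order}^dims. A 5th-order basis over PinBall's 4 state
        variables yields 6^4 = 1296 features."""
        coeffs = np.array(list(product(range(order + 1), repeat=dims)))
        return lambda x: np.cos(np.pi * coeffs @ np.asarray(x, dtype=float))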
Other parameters were as in Konidaris and Barto [3].

²Java source code for Pinball can be downloaded at http://www-all.cs.umass.edu/~gdk/pinball

CST used an expected skill length of 100, δ = 0.0001, particle filter parameters N = 30 and M = 50, and a first-order Fourier basis (16 basis functions). After segmenting the first trajectory we used isotropic Gaussians with variance 0.5² to bias the segmentation of the second. The full 3rd-order Fourier basis representation was used for merging. To obtain a fair comparison to skill chaining, we initialized the CST skill policies using 10 episodes of experience replay of the demonstrated trajectories, rather than using the sufficient statistics to perform a least-squares value function fit.

4.2 Results

Trajectory segmentation was successful for all demonstration trajectories, and all pairs were merged successfully into skill trees when the alignment bias was used to segment the second trajectory in the pair (two of the five could not be merged due to misalignments when the bias was not used). Example segmentations and the resulting merged trajectories are shown in Figure 3.

Figure 3: The Pinball instance used in our experiment (a), along with segmented skill chains from a pair of sample solution trajectories (b and c), and the assignments obtained when the two chains are merged (d).

Figure 4: Learning curves in the PinBall domain, for agents employing skill trees created from demonstration trajectories, skill chaining agents, and agents starting with pre-learned skills.

The learning curves obtained by agents using the resulting skill trees are shown in Figure 4. These results compare the learning curves of CST agents, agents that perform skill chaining from scratch, and agents that are given fully pre-learned skills (obtained over 250 episodes of skill chaining). They show that the CST policies are not good enough to use immediately, as the agents do worse than those given pre-learned skills for the first few episodes. However, very shortly thereafter the CST agents are able to learn excellent policies, immediately performing much better than skill chaining agents, and shortly thereafter even exceeding the performance of agents with pre-learned skills. This is likely because the skill tree structure obtained from demonstration has fewer but better skills than that learned incrementally by skill chaining agents. In addition, segmenting demonstration trajectories into skills results in much faster learning than attempting to acquire the entire policy by demonstration at once. The learning curve for agents that first perform experience replay on the overall task value function and then proceed using skill chaining (not shown) is virtually identical to that of agents performing skill chaining from scratch.

5 Acquiring Skills from Human Demonstration on the uBot

In this section we show that CST is able to create skill chains and select appropriate abstractions for each skill from human demonstration on the uBot-5, a dynamically balancing mobile manipulator. Demonstration trajectories are obtained from an expert human operator, controlling the uBot as it enters a corridor, approaches a door, pushes the door open, turns right into a new corridor, and finally approaches and pushes on a panel (illustrated in Figure 5).
Figure 5: Starting at the beginning of a corridor (a), the uBot must approach and push open a door (b), turn through the doorway (c), then approach and push a panel (d).

To simplify perception, the uBot uses colored purple, orange and yellow circles placed on the door and panel, beginning of the back wall, and middle of the back wall, respectively, as perceptually salient markers indicating the centroid of each object. The distances (obtained using stereo vision) between the uBot and each marker are computed at 8Hz and filtered. The uBot is able to engage one of two motor abstractions at a time: either performing end-point position control of its hand, or controlling the speed and angle of its forward motion. Thus, we constructed six sensorimotor abstractions, each containing either the differences between the arm endpoint position and marker position, or the distance to and angle between the robot's torso and the object. We assume a reward function of −1 every 10th of a second.

#  Abstraction       Description          Trajectories Required
a  torso-purple      Drive to door.       2
b  endpoint-purple   Open door.           1
c  torso-orange      Drive toward wall.   1
d  torso-yellow      Turn.                2
e  torso-purple      Drive to panel.      1
f  endpoint-purple   Press panel.         3

Figure 6: A demonstration trajectory segmented into skills, each with an appropriate abstraction.

We gathered 12 demonstration trajectories from the uBot, of which 3 had to be discarded because the perceptual features were too noisy. Of the remaining 9, all segmented sensibly and 8 were able to be merged into a single skill chain. An example segmentation corresponding to this chain is shown in Figure 6 along with the abstractions selected, a brief description of each skill segment, and the number of sample trajectories required before the skill policy (learned using ridge regression with a 5th-order Fourier basis) could be replayed successfully 9 times out of 10. This shows that CST is able to segment trajectories obtained from a robot platform, select an appropriate abstraction in each case, and then replay the resulting policies using a small number of sample trajectories.

6 Related Work

Several methods exist for skill discovery in discrete reinforcement learning domains; the most recent relevant work is by Mehta et al. [14], which induces task hierarchies from demonstration trajectories in discrete domains, but assumes a factored MDP with given dynamic Bayes network action models. By contrast, we know of very little work on skill acquisition in continuous domains where the skills or action hierarchy are not designed in advance. Mugan and Kuipers [15] use learned qualitatively-discretized factored models of a continuous state space to derive options, which is only feasible and appropriate in some settings. In Neumann et al. [16], an agent learns to solve a complex task by sequencing task-specific parametrized motion templates. Finally, Tedrake [17] builds a similar tree to ours in the model-based control setting. A sequence of policies represented using linear function approximators may be considered a switching linear dynamical system. Methods exist for learning such systems from data [18, 19]; these methods are able to handle multivariate target variables and models that repeat in the sequence. However, they are consequently more complex and computationally intensive than the much simpler changepoint detection method we use, and they have not been used in the context of skill acquisition.
A great deal of work exists in robotics under the general heading of learning from demonstration [5], where control policies are learned using sample trajectories obtained from a human, robot demonstrator, or a planner. Most methods learn an entire single policy from data, although some perform segmentation: for example, Jenkins and Matarić [20] segment demonstrated data into motion primitives, and thereby build a motion primitive library. They perform segmentation using a heuristic specific to human-like kinematic motions; more recent work has used more principled statistical methods [21, 22] to segment the data into multiple models as a way to avoid perceptual aliasing in the policy. Other methods use demonstration to provide an initial policy that is then refined using reinforcement learning (e.g., Peters and Schaal [23]). Prior to our work, we know of no existing method that both performs trajectory segmentation and results in motion primitives suitable for further learning.

7 Discussion and Conclusions

CST makes several key assumptions. The first is that the demonstrated skills form a tree, when in some cases they may form a more general graph (e.g., when the demonstrated policy has a loop). A straightforward modification of the procedure to merge skill chains could accommodate such cases. We also assume that the domain reward function is known and that each option reward can be obtained from it by adding in a termination reward. A method for using inferred reward functions (e.g., Abbeel and Ng [24]) could be incorporated into our method when this is not true. However, this requires segmentation based on policy rather than value function, since rewards are not given at demonstration time. Because policies are usually multivariate, this would require a multivariate changepoint detection algorithm, such as that by Xuan and Murphy [18]. Finally, we assume that the best model for combining a pair of skills is the model selected for representing both individually. This may not always hold: two skills best represented individually by one model may be better represented together using another (perhaps more general) one. Since the correct abstraction would presumably be at least competitive during segmentation, such cases can be resolved by considering segmentations other than the final MAP solution when merging.

Segmenting demonstration trajectories into skills has several advantages. Each skill is allocated its own abstraction, and therefore can be learned and represented efficiently, potentially allowing us to learn higher-dimensional, extended policies. During learning, an unsuccessful or partial episode can still improve skills whose goals were nevertheless reached. Confidence-based learning methods [25] can be applied to each skill individually. Finally, skills learned using agent-centric features (such as in our uBot example) can be transferred to new problems [26], and thereby detached from a problem-specific setting to be more generally useful. Taken together, these advantages, in conjunction with the application of CST to bootstrap skill policy acquisition, may prove crucial to scaling up policy learning methods to high-dimensional, continuous domains.

Acknowledgements

We would like to thank Dan Xie and Dirk Ruiken for their invaluable help with the uBot, and Phil Thomas and Brenna Argall for useful discussions. Andrew Barto and George Konidaris were supported by the Air Force Office of Scientific Research under grant FA9550-08-1-0418.
Scott Kuindersma is supported by a NASA GSRP fellowship.

References
[1] A.G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13:41–77, 2003. Special Issue on Reinforcement Learning.
[2] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[3] G.D. Konidaris and A.G. Barto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems 22, pages 1015–1023, 2009.
[4] G.D. Konidaris and A.G. Barto. Efficient skill learning using abstraction selection. In Proceedings of the Twenty First International Joint Conference on Artificial Intelligence, July 2009.
[5] B. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57:469–483, 2009.
[6] M. Huber and R.A. Grupen. A feedback control structure for on-line learning tasks. Robotics and Autonomous Systems, 22(3-4):303–315, 1997.
[7] M. Rosenstein and A.G. Barto. Supervised actor-critic reinforcement learning. In J. Si, A.G. Barto, A. Powell, and D. Wunsch, editors, Learning and Approximate Dynamic Programming: Scaling up the Real World, pages 359–380. John Wiley & Sons, Inc., New York, 2004.
[8] P. Fearnhead and Z. Liu. On-line inference for multiple changepoint problems. Journal of the Royal Statistical Society B, 69:589–605, 2007.
[9] R.S. Sutton, D. Precup, and S.P. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
[10] T. Lozano-Perez, M.T. Mason, and R.H. Taylor. Automatic synthesis of fine-motion strategies for robots. The International Journal of Robotics Research, 3(1):3–24, 1984.
[11] R.R. Burridge, A.A. Rizzi, and D.E. Koditschek. Sequential composition of dynamically dextrous robot behaviors. International Journal of Robotics Research, 18(6):534–555, 1999.
[12] S. Boccaletti, A. Farini, E.J. Kostelich, and F.T. Arecchi. Adaptive targeting of chaos. Physical Review E, 55(5):4845–4848, 1997.
[13] G.D. Konidaris and S. Osentoski. Value function approximation in reinforcement learning using the Fourier basis. Technical Report UM-CS-2008-19, Department of Computer Science, University of Massachusetts Amherst, June 2008.
[14] N. Mehta, S. Ray, P. Tadepalli, and T. Dietterich. Automatic discovery and transfer of MAXQ hierarchies. In Proceedings of the Twenty Fifth International Conference on Machine Learning, pages 648–655, 2008.
[15] J. Mugan and B. Kuipers. Autonomously learning an action hierarchy using a learned qualitative state representation. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, 2009.
[16] G. Neumann, W. Maass, and J. Peters. Learning complex motions by sequencing simpler motion templates. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[17] R. Tedrake. LQR-Trees: Feedback motion planning on sparse randomized trees. In Proceedings of Robotics: Science and Systems, pages 18–24, 2009.
[18] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, 2007.
[19] E.B. Fox, E.B. Sudderth, M.I. Jordan, and A.S. Willsky. Nonparametric Bayesian learning of switching linear dynamical systems. In Advances in Neural Information Processing Systems 21, 2008.
[20] O.C. Jenkins and M. Matarić.
Performance-derived behavior vocabularies: data-driven acquisition of skills from motion. International Journal of Humanoid Robotics, 1(2):237–288, 2004.
[21] D.H. Grollman and O.C. Jenkins. Incremental learning of subtasks from unsegmented demonstration. In International Conference on Intelligent Robots and Systems, 2010.
[22] J. Butterfield, S. Osentoski, G. Jay, and O.C. Jenkins. Learning from demonstration using a multi-valued function regressor for time-series data. In Proceedings of the Tenth IEEE-RAS International Conference on Humanoid Robots, 2010.
[23] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180–1190, 2008.
[24] P. Abbeel and A.Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[25] S. Chernova and M. Veloso. Confidence-based policy learning from demonstration using Gaussian mixture models. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, 2007.
[26] G.D. Konidaris and A.G. Barto. Building portable options: Skill transfer in reinforcement learning. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, 2007.
Guaranteed Rank Minimization via Singular Value Projection

Prateek Jain
Microsoft Research Bangalore
Bangalore, India
[email protected]

Raghu Meka
UT Austin Dept. of Computer Sciences
Austin, TX, USA
[email protected]

Inderjit Dhillon
UT Austin Dept. of Computer Sciences
Austin, TX, USA
[email protected]

Abstract

Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than the existing methods. We also introduce a Newton-step for our SVP framework to speed up the convergence with substantial empirical gains. Next, we address a practically important application of ARMP: the problem of low-rank matrix completion, for which the defining affine constraints do not directly obey RIP, hence the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem.

1 Introduction

In this paper we study the general affine rank minimization problem (ARMP),

min_{X ∈ R^(m×n)} rank(X)  s.t.  A(X) = b,    (ARMP)

where A is an affine transformation from R^(m×n) to R^d and b ∈ R^d.

The affine rank minimization problem above is of considerable practical interest and many important machine learning problems such as matrix completion, low-dimensional metric embedding, low-rank kernel learning can be viewed as instances of the above problem. Unfortunately, ARMP is NP-hard in general and is also NP-hard to approximate ([22]). Until recently, most known methods for ARMP were heuristic in nature with few known rigorous guarantees. In a recent breakthrough, Recht et al. [24] gave the first nontrivial results for the problem, obtaining guaranteed rank minimization for affine transformations A that satisfy a restricted isometry property (RIP). Define the isometry constant of A, δ_k, to be the smallest number such that for all X ∈ R^(m×n) of rank at most k,

(1 − δ_k)‖X‖²_F ≤ ‖A(X)‖²₂ ≤ (1 + δ_k)‖X‖²_F.    (1)

The above RIP condition is a direct generalization of the RIP condition used in the compressive sensing context. Moreover, RIP holds for many important practical applications of ARMP such as image compression, linear time-invariant systems. In particular, Recht et al. show that for most natural families of random measurements, RIP is satisfied even for only O(nk log n) measurements. Also, Recht et al. show that for ARMP with isometry constant δ_5k < 1/10, the minimum rank solution can be recovered by the minimum trace-norm solution.
In this paper we propose a simple and efficient algorithm, SVP (Singular Value Projection), based on the projected gradient algorithm. We present a simple analysis showing that SVP recovers the minimum rank solution for noisy affine constraints that satisfy RIP, and we prove the following guarantees. (Independently of our work, Goldfarb and Ma [12] proposed an algorithm similar to SVP. However, their analysis and formulation differ from ours. They also require stronger isometry assumptions, δ_3k < 1/√30, than our analysis.)

Theorem 1.1 Suppose the isometry constant of A satisfies δ_2k < 1/3 and let b = A(X*) for a rank-k matrix X*. Then SVP (Algorithm 1) with step size η_t = 1/(1 + δ_2k) converges to X*. Furthermore, SVP outputs a matrix X of rank at most k such that ‖A(X) − b‖₂² ≤ ε and ‖X − X*‖_F² ≤ ε/(1 − δ_2k) in at most
  (1 / log((1 − δ_2k)/(2δ_2k))) · log(‖b‖₂²/(2ε))
iterations.

Theorem 1.2 (Main) Suppose the isometry constant of A satisfies δ_2k < 1/3 and let b = A(X*) + e for a rank-k matrix X* and an error vector e ∈ R^d. Then SVP with step size η_t = 1/(1 + δ_2k) outputs a matrix X of rank at most k such that ‖A(X) − b‖₂² ≤ C‖e‖² + ε and ‖X − X*‖_F² ≤ (C‖e‖² + ε)/(1 − δ_2k), for any ε ≥ 0, in at most
  (1 / log(1/D)) · log(‖b‖₂²/(2(C‖e‖² + ε)))
iterations, for universal constants C, D.

As our SVP algorithm is based on projected gradient descent, it behaves as a first-order method and may require a relatively large number of iterations to achieve high accuracy, even after identifying the correct row and column subspaces. To this end, we introduce a Newton-type step in our framework (SVP-Newton) rather than using a simple gradient-descent step. Guarantees similar to Theorems 1.1 and 1.2 follow easily for SVP-Newton using the proofs for SVP. In practice, SVP-Newton performs better than SVP in terms of accuracy and number of iterations.

We next consider an important application of ARMP: the low-rank matrix completion problem (MCP). Given a small number of entries from an unknown low-rank matrix, the task is to complete the missing entries. Note that RIP does not hold directly for this problem. Recently, Candès and Recht [6], Candès and Tao [7], and Keshavan et al. [14] gave the first theoretical guarantees for the problem, obtaining exact recovery from an almost optimal number of uniformly sampled entries. While RIP does not hold for MCP, we show that a similar property holds for incoherent matrices [6]. Given our refined RIP and a hypothesis bounding the incoherence of the iterates arising in SVP, an analysis similar to that of Theorem 1.1 immediately implies that SVP optimally solves MCP. We provide strong empirical evidence for our hypothesis and show that both of our algorithms recover a low-rank matrix from an almost optimal number of uniformly sampled entries.

In summary, our main contributions are:
- Motivated by [11], we propose a projected gradient based algorithm, SVP, for ARMP and show that our method recovers the optimal rank solution when the affine constraints satisfy RIP. To the best of our knowledge, our isometry constant requirements are the least stringent: we only require δ_2k < 1/3, as opposed to δ_5k < 1/10 by Recht et al., δ_3k < 1/(4√3) by Lee and Bresler [18], and δ_4k < 0.04 by Lee and Bresler [17].
- We introduce a Newton-type step in the SVP method, which is useful when high precision is critical. SVP-Newton has guarantees similar to those of SVP, is more stable, and has better empirical performance in terms of accuracy.
For instance, on the Movie-Lens dataset [1] with rank k = 3, SVP-Newton achieves an RMSE of 0.89, while the SVT method [5] achieves an RMSE of 0.98.
- As observed in [23], most trace-norm based methods perform poorly for matrix completion when entries are sampled from more realistic power-law distributions. Our method SVP-Newton is relatively robust to the sampling technique and performs significantly better than the methods of [5, 14, 23] even for power-law distributed samples.
- We show that the affine constraints in the low-rank matrix completion problem satisfy a weaker restricted isometry property and, as supported by empirical evidence, conjecture that SVP (as well as SVP-Newton) recovers the underlying matrix from an almost optimal number of uniformly random samples.
- We evaluate our method on a variety of synthetic and real-world datasets and show that our methods consistently outperform, both in accuracy and time, various existing methods [5, 14].

2 Method
In this section, we first introduce our Singular Value Projection (SVP) algorithm for ARMP and present a proof of its optimality for affine constraints satisfying RIP (1). We then specialize our algorithm to the problem of matrix completion and prove a more restricted isometry property for the same. Finally, we introduce a Newton-type step in our SVP algorithm and prove its convergence.

2.1 Singular Value Projection (SVP)
Consider the following more robust formulation of ARMP (RARMP),
  min_X ψ(X) = (1/2) ‖A(X) − b‖₂²  s.t.  X ∈ C(k) = {X : rank(X) ≤ k}.   (RARMP)
The hardness of the above problem mainly comes from the non-convexity of the set of low-rank matrices C(k). However, the Euclidean projection onto C(k) can be computed efficiently using the singular value decomposition (SVD). Our algorithm uses this observation along with the projected gradient method to efficiently minimize the objective function specified in (RARMP).

Let P_k : R^{m×n} → R^{m×n} denote the orthogonal projection onto the set C(k), that is, P_k(X) = argmin_Y {‖Y − X‖_F : Y ∈ C(k)}. It is well known that P_k(X) can be computed efficiently from the top k singular values and vectors of X. In SVP, a candidate solution to ARMP is computed iteratively by starting from the all-zero matrix and adapting the classical projected gradient descent update as follows (note that ∇ψ(X) = A^T(A(X) − b)):
  X^{t+1} ← P_k( X^t − η_t ∇ψ(X^t) ) = P_k( X^t − η_t A^T(A(X^t) − b) ).   (2)
Algorithm 1 presents SVP in more detail. Note that the iterates X^t are always low-rank, facilitating faster computation of the SVD. See Section 3 for a more detailed discussion of computational issues.

Algorithm 1 Singular Value Projection (SVP)
Require: A, b, tolerance ε, step sizes η_t for t = 0, 1, 2, . . .
1: Initialize: X^0 = 0 and t = 0
2: repeat
3:   Y^{t+1} ← X^t − η_t A^T(A(X^t) − b)
4:   Compute the top k singular vectors of Y^{t+1}: U_k, Σ_k, V_k
5:   X^{t+1} ← U_k Σ_k V_k^T
6:   t ← t + 1
7: until ‖A(X^{t+1}) − b‖₂² ≤ ε
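A minimal Python sketch of Algorithm 1 follows (our illustration; the authors' implementation is in Matlab with mex files). For clarity, A is stored densely as a d × mn matrix acting on vec(X), and the projection P_k is computed with a full SVD; Theorem 1.1 suggests the step size η_t = 1/(1 + δ_2k):

import numpy as np

def svp(A, b, m, n, k, eta, tol=1e-9, max_iter=500):
    # Singular Value Projection (Algorithm 1), dense-matrix sketch.
    X = np.zeros((m, n))
    for _ in range(max_iter):
        residual = A @ X.ravel() - b                  # A(X) - b
        if residual @ residual <= tol:                # stopping rule of line 7
            break
        Y = X - eta * (A.T @ residual).reshape(m, n)  # gradient step (line 3)
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]               # P_k: keep top-k SVD terms
    return X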
Analysis for Constraints Satisfying RIP
Theorem 1.1 shows that SVP converges to an ε-approximate solution of RARMP in O(log(‖b‖²/ε)) steps. Theorem 1.2 shows a similar result for the noisy case. The theorems follow from the following lemma, which bounds the objective function after each iteration.

Lemma 2.1 Let X* be an optimal solution of (RARMP) and let X^t be the iterate obtained by SVP at the t-th iteration. Then,
  ψ(X^{t+1}) ≤ ψ(X*) + (δ_2k / (1 − δ_2k)) ‖A(X* − X^t)‖₂²,
where δ_2k is the rank-2k isometry constant of A.

The lemma follows from elementary linear algebra, the optimality of the SVD (Eckart-Young theorem), and two simple applications of RIP. We refer to the supplementary material (Appendix A) for a detailed proof. We now prove Theorem 1.1; Theorem 1.2 can be proved similarly (see Appendix A of the supplementary material for a detailed proof).

Proof of Theorem 1.1 Using Lemma 2.1 and the fact that ψ(X*) = 0, it follows that
  ψ(X^{t+1}) ≤ (δ_2k / (1 − δ_2k)) ‖A(X* − X^t)‖₂² = (2δ_2k / (1 − δ_2k)) ψ(X^t).
Also, note that for δ_2k < 1/3, 2δ_2k/(1 − δ_2k) < 1. Hence, ψ(X^τ) ≤ ε where τ = (1 / log((1 − δ_2k)/(2δ_2k))) · log(ψ(X^0)/ε). Further, using RIP for the rank at most 2k matrix X^τ − X*, we get ‖X^τ − X*‖_F² ≤ ψ(X^τ)/(1 − δ_2k) ≤ ε/(1 − δ_2k). Now, the SVP algorithm is initialized with X^0 = 0, i.e., ψ(X^0) = ‖b‖₂²/2. Hence, τ = (1 / log((1 − δ_2k)/(2δ_2k))) · log(‖b‖₂²/(2ε)).

2.2 Matrix Completion
We first describe the low-rank matrix completion problem formally. For Ω ⊆ [m] × [n], let P_Ω : R^{m×n} → R^{m×n} denote the projection onto the index set Ω; that is, (P_Ω(X))_ij = X_ij for (i, j) ∈ Ω and (P_Ω(X))_ij = 0 otherwise. Then the low-rank matrix completion problem (MCP) can be formulated as follows:
  min_X rank(X)  s.t.  P_Ω(X) = P_Ω(X*),  X ∈ R^{m×n}.   (MCP)
Observe that MCP is a special case of ARMP, so we can apply SVP to matrix completion. We use step size η_t = 1/((1 + δ)p), where p is the density of sampled entries and δ is a parameter which we will explain later in this section. Using the given step size and update (2), we get the following update for matrix completion:
  X^{t+1} ← P_k( X^t − (1/((1 + δ)p)) (P_Ω(X^t) − P_Ω(X*)) ).   (3)

Although matrix completion is a special case of ARMP, the affine constraints that define MCP, P_Ω, do not satisfy RIP in general. Thus Theorems 1.1 and 1.2 above and the results of Recht et al. [24] do not directly apply to MCP. However, we show that the matrix completion affine constraints satisfy RIP for low-rank incoherent matrices.

Definition 2.1 (Incoherence) A matrix X ∈ R^{m×n} with singular value decomposition X = UΣV^T is μ-incoherent if max_{i,j} |U_ij| ≤ μ/√m and max_{i,j} |V_ij| ≤ μ/√n.

The above notion of incoherence is similar to that introduced by Candès and Recht [6] and also used by [7, 14]. Intuitively, high incoherence (i.e., μ is small) implies that the non-zero entries of X are not concentrated in a small number of entries. Hence, a random sampling of the matrix should provide enough global information to satisfy RIP. Using the above definition, we prove the following refined restricted isometry property.

Theorem 2.2 There exists a constant C ≥ 0 such that the following holds for all 0 < δ < 1, μ ≥ 1, n ≥ m ≥ 3: for Ω ⊆ [m] × [n] chosen according to the Bernoulli model with density p ≥ Cμ²k² log n/(δ²m), with probability at least 1 − exp(−n log n), the following restricted isometry property holds for all μ-incoherent matrices X of rank at most k:
  (1 − δ) p ‖X‖_F² ≤ ‖P_Ω(X)‖_F² ≤ (1 + δ) p ‖X‖_F².   (4)

Roughly, our proof combines a Chernoff bound estimate for ‖P_Ω(X)‖_F² with a union bound over low-rank incoherent matrices. A proof sketch is presented in Section 2.2.1. Given the above refined RIP, if the iterates arising in SVP are shown to be incoherent, the arguments of Theorem 1.1 can be used to show that SVP achieves exact recovery for low-rank incoherent matrices from uniformly sampled entries. As supported by empirical evidence, we hypothesize that the iterates X^t arising in SVP remain incoherent when the underlying matrix X* is incoherent.
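To make update (3) concrete, the following sketch (ours) performs one matrix-completion iteration of SVP; the boolean array mask encodes Ω and M_obs carries the observed entries P_Ω(X*). A dense SVD is used for brevity, whereas the low-rank-plus-sparse structure of the iterate (discussed in Section 3) allows truncated solvers such as PROPACK:

import numpy as np

def svp_mc_step(X, M_obs, mask, k, p, delta=1.0/3):
    # One step of update (3):
    # X <- P_k( X - (1/((1+delta) p)) * (P_Omega(X) - P_Omega(X*)) ).
    residual = np.where(mask, X - M_obs, 0.0)   # sparse residual supported on Omega
    Y = X - residual / ((1.0 + delta) * p)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k projection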
Figure 1(d) plots the maximum incoherence max_t μ(X^t) = √n · max_{t,i,j} |U^t_ij|, where U^t are the left singular vectors of the intermediate iterates X^t computed by SVP. The figure clearly shows that the incoherence μ(X^t) of the iterates is bounded by a constant independent of the matrix size n and density p throughout the execution of SVP. Figure 1(c) plots the threshold sampling density p beyond which matrix completion for randomly generated matrices is solved exactly by SVP, for fixed k and varying matrix sizes n. Note that the density threshold matches the optimal information-theoretic bound [14] of Θ(k log n/n). Motivated by Theorem 2.2 and supported by empirical evidence (Figures 1(c), (d)), we hypothesize that SVP achieves exact recovery from an almost optimal number of samples for incoherent matrices.

Conjecture 2.3 Fix μ, k and δ ≤ 1/3. Then there exists a constant C such that for a μ-incoherent matrix X* of rank at most k and Ω sampled from the Bernoulli model with density p = Ω_{μ,k}((log n)/m), SVP with step size η_t = 1/((1 + δ)p) converges to X* with high probability. Moreover, SVP outputs a matrix X of rank at most k such that ‖P_Ω(X) − P_Ω(X*)‖_F² ≤ ε after O_{μ,k}(log(1/ε)) iterations.

2.2.1 RIP for Matrix Completion on Incoherent Matrices
We now prove the restricted isometry property of Theorem 2.2 for the affine constraints that result from the projection operator P_Ω. To prove Theorem 2.2, we first show the theorem for a discrete collection of matrices using Chernoff-type large-deviation bounds, and then use standard quantization arguments to generalize to the continuous case. We first introduce some notation and provide useful lemmas for our main proof¹. First, we introduce the notion of α-regularity.

Definition 2.2 A matrix X ∈ R^{m×n} is α-regular if max_{i,j} |X_ij| ≤ (α/√(mn)) ‖X‖_F.

Lemma 2.4 below relates the notion of regularity to incoherence, and Lemma 2.5 proves (4) for a fixed regular matrix when the samples Ω are selected independently.

Lemma 2.4 Let X ∈ R^{m×n} be a μ-incoherent matrix of rank at most k. Then X is μ√k-regular.

Lemma 2.5 Fix an α-regular X ∈ R^{m×n} and 0 < δ < 1. Then, for Ω ⊆ [m] × [n] chosen according to the Bernoulli model, with each pair (i, j) ∈ Ω chosen independently with probability p,
  Pr[ |‖P_Ω(X)‖_F² − p‖X‖_F²| ≥ δp‖X‖_F² ] ≤ 2 exp( −δ²pmn/(3α²) ).

While the above lemma shows Equation (4) for a fixed rank-k, μ-incoherent X (i.e., a (μ√k)-regular X, using Lemma 2.4), we need to show Equation (4) for all such rank-k incoherent matrices. To handle this problem, we discretize the space of low-rank incoherent matrices so as to be able to use the above lemma together with a union bound. We now show the existence of a small set of matrices S(ε, μ) ⊆ R^{m×n} such that every low-rank μ-incoherent matrix is close to an appropriately regular matrix from the set S(ε, μ).

Lemma 2.6 For all 0 < ε < 1/2, μ ≥ 1, m, n ≥ 3 and k ≥ 1, there exists a set S(ε, μ) ⊆ R^{m×n} with |S(ε, μ)| ≤ (mnk/ε)^{3(m+n)k} such that the following holds. For any μ-incoherent X ∈ R^{m×n} of rank k with ‖X‖₂ = 1, there exists Y ∈ S(ε, μ) s.t. ‖Y − X‖_F < ε and Y is (4μ√k)-regular.

We now prove Theorem 2.2 by combining Lemmas 2.5 and 2.6 and applying a union bound. We present a sketch of the proof but defer the details to the supplementary material (Appendix B).
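The concentration asserted by Theorem 2.2 is also easy to probe numerically before turning to the proof sketch. The fragment below (our illustration; random Gaussian-factor matrices stand in for incoherent ones, which they are with high probability) measures how far ‖P_Ω(X)‖_F² strays from p‖X‖_F² under Bernoulli(p) sampling:

import numpy as np

def empirical_sampling_rip(m=400, n=400, k=5, p=0.3, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        X = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))  # random rank-k
        mask = rng.random((m, n)) < p                          # Bernoulli model
        ratio = np.linalg.norm(X * mask, 'fro')**2 / (p * np.linalg.norm(X, 'fro')**2)
        worst = max(worst, abs(ratio - 1.0))
    return worst  # empirical delta in (1 +/- delta) p ||X||_F^2 of (4)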
Proof Sketch of Theorem 2.2 Let S′(ε, μ) = {Y : Y ∈ S(ε, μ), Y is 4μ√k-regular}, where S(ε, μ) is as in Lemma 2.6 with ε = δ/(9mnk). Let m ≤ n. Then, by Lemma 2.5 and a union bound, for any Y ∈ S′(ε, μ),
  Pr[ |‖P_Ω(Y)‖_F² − p‖Y‖_F²| ≥ δp‖Y‖_F² ] ≤ 2(mnk/ε)^{3(m+n)k} exp( −δ²pmn/(16μ²k) ) ≤ exp(C₁nk log n) · exp( −δ²pmn/(16μ²k) ),
where C₁ ≥ 0 is a constant independent of m, n, k. Thus, if p > Cμ²k² log n/(δ²m), where C = 16(C₁ + 1), then with probability at least 1 − exp(−n log n) the following holds:
  for all Y ∈ S′(ε, μ),  |‖P_Ω(Y)‖_F² − p‖Y‖_F²| ≤ δp‖Y‖_F².   (5)
As the statement of the theorem is invariant under scaling, it is enough to show the statement for all μ-incoherent matrices X of rank at most k with ‖X‖₂ = 1. Fix such an X and suppose that (5) holds. Now, by Lemma 2.6 there exists Y ∈ S′(ε, μ) such that ‖Y − X‖_F ≤ ε. Moreover, ‖Y‖_F² ≤ (‖X‖_F + ε)² ≤ ‖X‖_F² + 2ε‖X‖_F + ε² ≤ ‖X‖_F² + 3ε√k. Proceeding similarly, we can show that
  |‖X‖_F² − ‖Y‖_F²| ≤ 3ε√k,  |‖P_Ω(Y)‖_F² − ‖P_Ω(X)‖_F²| ≤ 3ε√k.   (6)
Combining inequalities (5) and (6) above, with probability at least 1 − exp(−n log n) we have
  |‖P_Ω(X)‖_F² − p‖X‖_F²| ≤ |‖P_Ω(X)‖_F² − ‖P_Ω(Y)‖_F²| + p|‖X‖_F² − ‖Y‖_F²| + |‖P_Ω(Y)‖_F² − p‖Y‖_F²| ≤ 2δp‖X‖_F².
The theorem follows from the above inequality.

2.3 SVP-Newton
In this section we introduce a Newton-type step in our SVP method to speed up its convergence. Recall that each iteration of SVP (Equation (2)) takes a step along the gradient of the objective function and then projects the iterate onto the set of low-rank matrices using the SVD. Now, the top k singular vectors (U_k, V_k) of Y^{t+1} = X^t − η_t A^T(A(X^t) − b) determine the row and column spaces of the next iterate in SVP, and Σ_k is given by Σ_k = Diag(U_k^T (X^t − η_t A^T(A(X^t) − b)) V_k). Hence, Σ_k can be seen as the result of a gradient-descent step for a quadratic objective function in S; since a Newton step solves a quadratic objective exactly, we may instead set Σ_k = argmin_S ψ(U_k S V_k^T). This leads us to the following variant of SVP, which we call SVP-Newton²:
  Compute the top k singular vectors U_k, V_k of Y^{t+1} = X^t − η_t A^T(A(X^t) − b);
  X^{t+1} = U_k Σ_k V_k^T,  where Σ_k = argmin_S ψ(U_k S V_k^T) = argmin_S ‖A(U_k S V_k^T) − b‖₂².
Note that as A is an affine transformation, Σ_k can be computed by solving a least squares problem on the k × k entries of S. Also, for a single iteration, given the same starting point, SVP-Newton decreases the objective function more than SVP. This observation, along with straightforward modifications of the proofs of Theorems 1.1 and 1.2, shows that similar guarantees hold for SVP-Newton as well³.

Note that the least squares problem for computing Σ_k has k² variables. This makes SVP-Newton computationally expensive for problems with large rank, particularly in situations with a large number of constraints, as is the case for matrix completion. To overcome this issue, we also consider the alternative where we restrict Σ_k to be a diagonal matrix, leading to the update
  Σ_k = argmin_{S : S_ij = 0 for i ≠ j} ‖A(U_k S V_k^T) − b‖₂².   (7)
We call the above method SVP-NewtonD (for SVP-Newton Diagonal). As for SVP-Newton, guarantees similar to SVP follow for SVP-NewtonD by observing that for each iteration, SVP-NewtonD decreases the objective function more than SVP.

¹ Detailed proofs of all the lemmas in this section are provided in Appendix B of the supplementary material.
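The diagonal restriction in (7) turns each re-fit into an ordinary least-squares problem in only k variables. A sketch (ours, again with a dense matrix representation of the affine map) is:

import numpy as np

def svp_newton_d_step(A, b, U, V):
    # Given the top-k singular vectors U, V of the gradient-step matrix Y,
    # refit a diagonal S minimizing ||A(U S V^T) - b||_2, as in update (7).
    k = U.shape[1]
    # Column i of M is the affine map applied to the rank-one matrix u_i v_i^T.
    M = np.stack([A @ np.outer(U[:, i], V[:, i]).ravel() for i in range(k)], axis=1)
    s, *_ = np.linalg.lstsq(M, b, rcond=None)   # k-variable least squares
    return U @ np.diag(s) @ V.T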
3 Related Work and Computational Issues
The general rank minimization problem with affine constraints is NP-hard and is also NP-hard to approximate [22]. Most methods for ARMP either relax the rank constraint to a convex function such as the trace-norm [8, 9], or assume a factorization and optimize the resulting non-convex problem by alternating minimization [4, 3, 15]. The results of Recht et al. [24] were later extended to noisy measurements and isometry constants up to δ_3k < 1/(4√3) by Fazel et al. [10] and Lee and Bresler [18]. However, even the best existing optimization algorithms for the trace-norm relaxation are relatively inefficient in practice. Recently, Lee and Bresler [17] proposed an algorithm (ADMiRA) motivated by the orthogonal matching pursuit line of work in compressed sensing, and show that for affine constraints with isometry constant δ_4k ≤ 0.04, their algorithm recovers the optimal solution. However, their method is not very efficient for large datasets or when the rank of the optimal solution is relatively large.

For the matrix completion problem, until the recent works of [6], [7] and [14], there were few methods with rigorous guarantees. The alternating least squares minimization heuristic and its variants [3, 15] perform the best in practice, but are notoriously hard to analyze. Candès and Recht [6] and Candès and Tao [7] show that if X* is μ-incoherent and the known entries are sampled uniformly at random with |Ω| ≥ C(μ) k² n log² n, finding the minimum trace-norm solution recovers the minimum rank solution. Keshavan et al. independently obtained similar results for exact recovery from uniformly sampled Ω with |Ω| ≥ C(μ, k) n log n.

Minimizing the trace-norm of a matrix subject to affine constraints can be cast as a semi-definite program (SDP). However, algorithms for semi-definite programming, as used by most methods for minimizing the trace-norm, are prohibitively expensive even for moderately large datasets. Recently, a variety of methods based mostly on iterative soft-thresholding have been proposed to solve the trace-norm minimization problem more efficiently. For instance, Cai et al. [5] proposed a Singular Value Thresholding (SVT) algorithm based on Uzawa's algorithm [2]. A related approach based on linearized Bregman iterations was proposed by Ma et al. [20] and Toh and Yun [25], while Ji and Ye [13] use Nesterov's gradient descent methods for optimizing the trace-norm. While the soft-thresholding based methods for trace-norm minimization are significantly faster than SDP-based approaches, they suffer from slow convergence (see Figure 2(d)). Also, noisy measurements pose considerable computational challenges for trace-norm optimization, as the rank of the intermediate iterates can become very large (see Figure 3(b)).

² We call our method SVP-Newton because the Newton method, when applied to a quadratic objective function, leads to the exact solution by solving the resulting least squares problem.
³ As a side note, we can show a stronger result for SVP-Newton when applied to the special case of compressed sensing, i.e., when the matrix X is restricted to be diagonal. Specifically, we can show that under certain assumptions SVP-Newton converges to the optimal solution in O(log k) iterations, improving upon the result of Maleki [21]. We give the precise statement of the theorem and proof in the supplementary material.
Figure 1: (a) Time taken by SVP and SVT for random instances of the affine rank minimization problem (ARMP) with optimal rank k = 5. (b) Reconstruction error for the MIT logo. (c) Empirical estimates of the sampling density threshold required for exact matrix completion by SVP (here C = 1.28); note that the empirical bounds match the information-theoretically optimal bound Θ(k log n/n). (d) Maximum incoherence max_t μ(X^t) over the iterates of SVP for varying densities p and sizes n; the incoherence is bounded by a constant, supporting Conjecture 2.3.

Figure 2: (a), (b) Running time (on log scale) and RMSE of various methods for the matrix completion problem with sampling density p = .1 and optimal rank k = 2. (c) Running time (on log scale) of various methods for matrix completion with sampling density p = .1 and n = 1000. (d) Number of iterations needed to reach RMSE 0.001.

For the case of matrix completion, SVP has an important property facilitating fast computation of the main update in equation (3): each iteration of SVP involves computing the singular value decomposition (SVD) of the matrix Y = X^t − (1/((1 + δ)p)) P_Ω(X^t − X*), where X^t is a matrix of rank at most k whose SVD is known and P_Ω(X^t − X*) is a sparse matrix. Thus, matrix-vector products of the form Yv can be computed in time O((m + n)k + |Ω|). This facilitates the use of fast SVD packages such as PROPACK [16] and ARPACK [19] that only require subroutines for computing matrix-vector products.

4 Experimental Results
In this section, we empirically evaluate our methods for the affine rank minimization problem and low-rank matrix completion. For both problems we present empirical results on synthetic as well as real-world datasets. For ARMP we compare our method against the trace-norm based singular value thresholding (SVT) method [5]. Note that although Cai et al. present the SVT algorithm in the context of MCP, it is easily adapted to ARMP. For MCP we compare against SVT, ADMiRA [17], the OptSpace (OPT) method of Keshavan et al. [14], and regularized alternating least squares minimization (ALS). We use our own implementations of SVT for ARMP and of ALS, while for matrix completion we use the code provided by the respective authors for SVT, ADMiRA and OPT. We report results averaged over 20 runs. All the methods are implemented in Matlab with mex files.

4.1 Affine Rank Minimization
We first compare our method against SVT on random instances of ARMP. We generate random matrices X ∈ R^{n×n} of different sizes n and fixed rank k = 5. We then generate d = 6kn random affine constraint matrices A_i and compute b = A(X). Figure 1(a) compares the computational time required by SVP and SVT (in log scale) to achieve a relative error ‖A(X) − b‖₂/‖b‖₂ of 10⁻³, and shows that our method requires many fewer iterations and is significantly faster than SVT.
Next we evaluate our method on the problem of matrix reconstruction from random measurements. As in Recht et al. [24], we use the MIT logo as the test image for reconstruction. The MIT logo we use is a 38 × 73 image of rank four. For reconstruction, we generate random measurement matrices A_i and measure b_i = Tr(A_i X). We let both SVP and SVT converge and then compute the reconstruction error for the original image. Figure 1(b) shows that our method incurs significantly smaller reconstruction error than SVT for the same number of measurements.

Matrix Completion: Synthetic Datasets (Uniform Sampling)
We now evaluate our method against various matrix completion methods for random low-rank matrices and uniform samples. We generate a random rank-k matrix X ∈ R^{n×n} and generate random Bernoulli samples with probability p. Figure 2(a) compares the time required by various methods (in log scale) to obtain a root mean square error (RMSE) of 10⁻³ on the sampled entries for fixed k = 2. Clearly, SVP is substantially faster than the other methods. Next, we evaluate our method for increasing k. Figure 2(b) compares the overall RMSE obtained by the various methods; note that SVP-Newton is significantly more accurate than both SVP and SVT. Figure 2(c) compares the time required by various methods to obtain an RMSE of 10⁻³ on the sampled entries for fixed n = 1000 and increasing k. Note that our algorithms scale well with increasing k and are faster than the other methods. Next, we analyze the reasons for the better performance of our methods. To this end, we plot the number of iterations required by our methods as compared to SVT (Figure 2(d)); even though each iteration of SVT is almost as expensive as our methods', our methods converge in significantly fewer iterations. Finally, we study the behavior of our method in the presence of noise. For this experiment, we generate random matrices of different sizes and add approximately 10% Gaussian noise. Figure 3(b) plots the time required by various methods as n increases from 1000 to 5000. Note that SVT is particularly sensitive to noise: due to the noise, the rank of the intermediate iterates arising in SVT can be fairly large.

Figure 3(a) data (RMSE on the Movie-Lens dataset for varying rank k):
               k = 2   k = 3   k = 5   k = 7   k = 10  k = 12
  SVP          1.15    1.14    1.09    1.08    1.07    1.08
  SVP-NewtonD  0.90    0.89    0.89    0.89    0.90    0.91
  ALS          0.88    0.87    0.86    0.86    0.87    0.88
  SVT          1.06    0.98    0.95    0.93    0.91    0.90

Figure 3: (a) RMSE incurred by various methods for matrix completion with different rank (k) solutions on the Movie-Lens dataset. (b) Time (on log scale) required by various methods for matrix completion with p = .1, k = 2 and 10% Gaussian noise; note that all four methods achieve similar RMSE. (c) RMSE incurred by various methods for matrix completion with p = 0.1, k = 10 when the sampling distribution follows a power-law distribution (Chung-Lu-Vu model). (d) RMSE incurred for the same problem setting as plot (c), but with added Gaussian noise.

Matrix Completion: Synthetic Dataset (Power-law Sampling)
We now evaluate our methods against existing matrix completion methods under more realistic power-law distributed samples. As before, we generate a random rank-k = 10 matrix X ∈
R^{n×n} and sample the entries of X using a graph generated by the Chung-Lu-Vu model with power-law distributed degrees (see [23] for details). Figure 3(c) plots the RMSE obtained by various methods for varying n and fixed sampling density p = 0.1. Note that SVP-NewtonD performs significantly better than SVT as well as SVP. Figure 3(d) plots the RMSE obtained by various methods when each sampled entry is corrupted with around 1% Gaussian noise. Note that here again SVP-NewtonD performs similarly to ALS and is significantly better than the other methods, including the ICMC method [23], which is specially designed for power-law sampling but is quite sensitive to noise.

Matrix Completion: Movie-Lens Dataset
Finally, we evaluate our method on the Movie-Lens dataset [1], which contains 1 million ratings for 3900 movies by 6040 users. Figure 3(a) shows the RMSE obtained by each method for varying k. For SVP and SVP-Newton, we fix the step size to be η = 1/(p√t), where t is the iteration number. For SVT, we fix δ = .2p using cross-validation. Since the rank cannot be fixed in SVT, we try various values of the parameter τ to obtain the desired rank solution. Note that SVP-Newton incurs an RMSE of 0.89 for k = 3. In contrast, SVT achieves an RMSE of 0.98 for the same rank. We remark that SVT was able to achieve an RMSE as low as 0.89, but it required a rank-17 solution and was significantly slower to converge because many intermediate iterates had large rank (up to around 150). We attribute the relatively poor performance of SVP and SVT, as compared with ALS and SVP-Newton, to the fact that the ratings matrix is not sampled uniformly, thus violating the crucial assumption of uniformly distributed samples.

Acknowledgements: This research was supported in part by NSF grant CCF-0728879.

References
[1] Movie-Lens dataset. Public dataset. URL http://www.grouplens.org/taxonomy/term/14.
[2] K. Arrow, L. Hurwicz, and H. Uzawa. Studies in Linear and Nonlinear Programming. Stanford University Press, Stanford, 1958.
[3] Robert Bell and Yehuda Koren. Scalable collaborative filtering with jointly derived neighborhood interpolation weights. In ICDM, pages 43-52, 2007. doi: 10.1109/ICDM.2007.90.
[4] Matthew Brand. Fast online SVD revisions for lightweight recommender systems. In SIAM International Conference on Data Mining, 2003.
[5] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
[6] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, December 2009.
[7] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053-2080, 2009.
[8] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order system approximation. In American Control Conference, Arlington, Virginia, 2001.
[9] M. Fazel, H. Hindi, and S. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In American Control Conference, 2003.
[10] M. Fazel, E. Candès, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In Signals, Systems and Computers, 42nd Asilomar Conference on, pages 1043-1047, Oct. 2008. doi: 10.1109/ACSSC.2008.5074571.
[11] Rahul Garg and Rohit Khandekar.
Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In ICML, 2009.
[12] Donald Goldfarb and Shiqian Ma. Convergence of fixed point continuation algorithms for matrix rank minimization, 2009. Submitted.
[13] Shuiwang Ji and Jieping Ye. An accelerated gradient method for trace norm minimization. In ICML, 2009.
[14] Raghunandan H. Keshavan, Sewoong Oh, and Andrea Montanari. Matrix completion from a few entries. In ISIT '09: Proceedings of the 2009 IEEE International Symposium on Information Theory, pages 324-328, Piscataway, NJ, USA, 2009. IEEE Press. ISBN 978-1-4244-4312-3.
[15] Yehuda Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD, pages 426-434, 2008. doi: 10.1145/1401890.1401944.
[16] R.M. Larsen. PROPACK: software for large and sparse SVD calculations. Available online. URL http://sun.stanford.edu/rmunk/PROPACK/.
[17] Kiryung Lee and Yoram Bresler. ADMiRA: Atomic decomposition for minimum rank approximation, 2009.
[18] Kiryung Lee and Yoram Bresler. Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint, 2009.
[19] Richard B. Lehoucq, Danny C. Sorensen, and Chao Yang. ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, 1998.
[20] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. To appear, Mathematical Programming Series A, 2010.
[21] Arian Maleki. Coherence analysis of iterative thresholding algorithms. CoRR, abs/0904.1193, 2009.
[22] Raghu Meka, Prateek Jain, Constantine Caramanis, and Inderjit S. Dhillon. Rank minimization via online learning. In ICML, pages 656-663, 2008. doi: 10.1145/1390156.1390239.
[23] Raghu Meka, Prateek Jain, and Inderjit S. Dhillon. Matrix completion from power-law distributed samples. In NIPS, 2009.
[24] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, 2007. To appear in SIAM Review.
[25] K.C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Preprint, 2009. URL http://www.math.nus.edu.sg/~matys/apg.pdf.
3904 |@word compression:1 norm:17 stronger:2 nd:1 linearized:1 decomposition:5 incurs:2 contains:1 lightweight:1 series:1 ours:1 existing:5 recovered:1 com:1 toh:2 danny:1 realistic:3 kdd:1 hypothesize:2 plot:7 designed:1 update:5 selected:1 fewer:2 propack:3 iterates:10 math:1 org:1 mathematical:1 along:3 direct:1 become:1 symposium:1 prove:8 specialize:1 combine:1 introduce:8 theoretically:1 hardness:1 behavior:1 cand:3 andrea:1 sdp:2 roughly:1 increasing:3 revision:1 project:1 provided:2 moreover:3 underlying:2 bounded:2 notation:1 prateek:3 argmin:3 substantially:1 kiryung:2 compressive:1 finding:1 transformation:3 sparsification:1 nj:1 guarantee:10 every:1 exactly:1 prohibitively:1 rm:13 uk:9 control:2 grant:1 appear:2 arnoldi:1 before:1 svt:34 meet:1 incoherence:10 interpolation:1 approximately:1 pmn:3 logo:4 garg:1 factorization:2 range:1 bi:1 averaged:1 trice:1 fazel:5 practical:2 vu:2 practice:3 union:4 definite:2 yehuda:2 atomic:1 empirical:9 universal:1 bell:1 significantly:10 adapting:1 projection:9 matching:1 boyd:2 regular:8 donald:1 get:3 onto:2 close:1 cannot:1 operator:1 context:2 applying:1 optimize:1 www:2 missing:1 jieping:1 straightforward:1 starting:2 independently:3 convex:4 rmunk:1 shen:1 recovery:7 identifying:1 immediately:1 nuclear:3 oh:1 embedding:1 handle:1 notion:3 suppose:3 rip:22 exact:8 programming:3 user:2 us:1 hypothesis:2 satisfying:2 expensive:3 particularly:2 observed:1 preprint:1 solved:1 eckart:1 sun:1 decrease:2 substantial:1 benjamin:2 convexity:1 moderately:1 nesterov:1 solving:2 algebra:1 upon:1 easily:2 various:14 tx:2 caramanis:1 jain:3 fast:4 describe:1 doi:4 neighborhood:2 refined:3 whose:1 heuristic:4 supplementary:5 solve:1 quite:1 stanford:3 relax:1 otherwise:1 compressed:3 statistic:1 jointly:1 noisy:4 online:3 eigenvalue:1 isbn:1 cai:3 propose:3 reconstruction:6 maryam:1 product:3 combining:2 poorly:1 achieve:3 frobenius:1 convergence:7 regularity:2 requirement:1 converges:4 completion:38 pose:1 ij:2 lowrank:2 progress:1 solves:1 strong:1 implemented:1 c:2 involves:1 implies:2 come:1 correct:1 attribute:1 stringent:1 material:5 public:1 require:5 fix:5 generalization:1 opt:5 isit:1 elementary:1 admira:7 strictly:1 hold:11 practically:1 around:2 exp:7 matthew:1 achieves:5 smallest:1 grouplens:1 utexas:2 sensitive:2 minimization:22 mit:4 clearly:2 always:1 gaussian:4 rather:1 varying:4 derived:1 vk:5 consistently:1 rank:79 bernoulli:4 mainly:1 contrast:1 rigorous:2 uij:2 subroutine:1 tao:3 issue:3 overall:1 breakthrough:1 initialize:1 special:3 fairly:1 sampling:13 chernoff:2 icml:3 np:4 report:1 bangalore:2 few:3 richard:1 randomly:1 raghunandan:1 microsoft:2 ab:1 interest:1 mining:1 sorensen:1 accurate:1 bregman:2 partial:1 arian:1 respective:1 orthogonal:2 euclidean:2 initialized:1 desired:1 theoretical:1 instance:6 column:1 soft:2 optspace:1 deviation:1 entry:16 uniform:2 virginia:1 optimally:1 kn:1 corrupted:1 proximal:1 synthetic:4 recht:13 density:12 fundamental:1 siam:4 international:2 lee:6 terence:1 again:1 satisfied:1 ukt:1 opposed:1 shiqian:1 american:2 inefficient:1 leading:1 chung:2 parrilo:2 sec:2 satisfy:8 later:2 root:2 try:1 observing:1 analyze:2 recover:1 candes:6 defer:1 rmse:20 contribution:1 collaborative:2 square:8 accuracy:4 efficiently:4 generalize:1 critically:1 lu:2 notoriously:1 submitted:1 explain:1 inform:1 definition:3 against:5 larsen:1 proof:13 recovers:7 gain:1 sampled:13 dataset:7 proved:1 recall:1 knowledge:1 ut:2 violating:1 follow:3 arlington:1 rahul:1 formulation:2 though:1 furthermore:1 until:3 
sketch:3 keshavan:4 nonlinear:1 multifaceted:1 usa:3 ye:2 ccf:1 maleki:2 hence:5 alternating:3 dhillon:3 goldfarb:3 iteratively:1 pdf:1 yun:2 complete:1 demonstrate:1 performs:5 image:4 recently:4 argminy:1 behaves:1 empirically:3 ji:2 icmc:3 million:1 measurement:7 refer:1 ai:3 meka:3 rd:2 mathematics:1 similarly:2 had:1 stable:1 add:1 isometry:18 recent:2 own:1 optimizing:1 constantine:1 certain:1 inequality:2 seen:1 minimum:10 zuowei:1 determine:1 converge:2 signal:1 semi:2 relates:1 faster:5 match:2 calculation:1 cross:1 dept:2 icdm:2 variant:2 scalable:1 metric:1 iteration:19 kernel:1 mex:1 c1:3 singular:17 jian:1 crucial:1 appropriately:1 specially:1 file:1 subject:2 facilitates:1 december:1 call:3 near:1 presence:2 yang:1 intermediate:4 enough:2 variety:2 iterate:3 gave:2 restrict:1 hurwicz:1 det:1 motivated:3 url:3 suffer:1 remark:1 matlab:1 useful:2 detailed:4 ellipsoidal:1 concentrated:1 generate:7 http:3 outperform:2 xij:2 continuation:1 nsf:1 arising:4 mnk:3 discrete:1 four:2 threshold:5 achieving:1 graph:1 relaxation:2 run:1 package:1 hankel:1 almost:6 family:1 throughout:1 coherence:1 appendix:4 scaling:1 bound:10 apg:1 guaranteed:4 koren:2 quadratic:2 nontrivial:1 adapted:1 constraint:20 software:1 speed:2 argument:2 min:3 optimality:2 relatively:5 conjecture:3 according:2 piscataway:1 poor:1 remain:1 smaller:1 modification:1 intuitively:1 restricted:10 invariant:2 pr:2 sij:1 taken:3 asilomar:1 computationally:1 mcp:10 equation:5 needed:1 prajain:1 end:2 raghu:4 well3:1 pursuit:1 available:1 apply:2 svp:104 obey:1 observe:2 alternative:1 slower:1 existence:1 original:1 top:4 running:2 log2:1 newton:28 yoram:2 emmanuel:3 prof:1 classical:1 feng:1 objective:7 added:1 diagonal:3 lehoucq:1 gradient:12 subspace:1 distance:1 reason:2 khandekar:1 argmins:1 code:1 index:1 minimizing:4 unfortunately:1 mostly:1 robert:1 statement:3 taxonomy:1 trace:13 implementation:1 unknown:1 perform:2 discretize:1 recommender:1 observation:3 datasets:5 vkt:2 descent:6 supporting:1 defining:1 situation:1 extended:1 precise:1 rn:3 rating:2 introduced:1 pablo:1 pair:1 cast:1 specified:1 required:8 nu:1 nip:1 trans:1 address:1 beyond:1 able:2 below:1 shuiwang:1 challenge:1 program:1 including:1 power:10 natural:1 regularized:2 largescale:1 hindi:2 mn:1 scheme:2 movie:6 incoherent:21 philadelphia:1 chao:1 review:1 geometric:1 acknowledgement:1 sg:1 rohit:1 relative:1 law:9 bresler:6 impressively:1 filtering:2 validation:1 foundation:1 incurred:3 degree:1 affine:23 thresholding:6 sewoong:1 vij:1 maxt:3 austin:4 row:1 summary:1 supported:4 repeat:1 side:1 weaker:2 guide:1 india:1 sparse:3 distributed:5 tolerance:1 overcome:1 uzawa:2 world:2 author:1 collection:1 projected:5 approximate:3 informationtheoretic:1 implicitly:1 arpack:2 global:1 continuous:1 iterative:4 nature:1 robust:5 obtaining:2 improving:1 matys:1 diag:1 pk:6 main:4 montanari:1 arrow:1 bounding:1 noise:11 facilitating:2 slow:1 precision:1 candidate:1 young:1 theorem:24 showing:2 sensing:4 maxi:3 evidence:4 exists:5 quantization:1 corr:1 magnitude:1 execution:1 nk:2 chen:1 inderjit:4 restarted:1 satisfies:2 ma:4 oct:1 viewed:1 formulated:1 towards:1 considerable:2 hard:5 specifically:1 uniformly:9 lemma:18 lens:5 svd:9 experimental:1 e:3 brand:1 formally:1 accelerated:2 evaluate:7
3,207
3,905
Humans Learn Using Manifolds, Reluctantly

Bryan R. Gibson, Xiaojin Zhu, Timothy T. Rogers†, Charles W. Kalish‡, Joseph Harrison‡
Department of Computer Sciences, †Psychology, and ‡Educational Psychology
University of Wisconsin-Madison, Madison, WI 53706 USA
{bgibson, jerryzhu}@cs.wisc.edu  {ttrogers, cwkalish, jcharrison}@wisc.edu

Abstract
When the distribution of unlabeled data in feature space lies along a manifold, the information it provides may be used by a learner to assist classification in a semi-supervised setting. While manifold learning is well known in machine learning, the use of manifolds in human learning is largely unstudied. We perform a set of experiments which test a human's ability to use a manifold in a semi-supervised learning task, under varying conditions. We show that humans may be encouraged into using the manifold, overcoming the strong preference for a simple, axis-parallel linear boundary.

1 Introduction
Consider a classification task where a learner is given training items x_1, . . . , x_l ∈ R^d, represented by d-dimensional feature vectors. The learner is also given the corresponding class labels y_1, . . . , y_l ∈ Y. In this paper, we focus on binary labels Y = {−1, 1}. In addition, the learner is given some unlabeled items x_{l+1}, . . . , x_{l+u} ∈ R^d without the corresponding labels. Importantly, the labeled and unlabeled items x_1 . . . x_{l+u} are distributed in a peculiar way in the feature space: they lie on smooth, lower-dimensional manifolds, such as those schematically shown in Figure 1(a). The question is: given this knowledge of labeled and unlabeled data, how will the learner classify x_{l+1}, . . . , x_{l+u}? Will the learner ignore the distribution information of the unlabeled data and simply use the labeled data to form a decision boundary, as in Figure 1(b)? Or will the learner propagate labels along the nonlinear manifolds, as in Figure 1(c)?

Figure 1: On a dataset with manifold structure, supervised learning and manifold learning make dramatically different predictions. Large symbols represent labeled items, dots unlabeled items. ((a) the data; (b) supervised learning; (c) manifold learning.)

When the learner is a machine learning algorithm, this question has been addressed by semi-supervised learning [2, 11]. The designer of the algorithm can choose to make the manifold assumption, also known as graph-based semi-supervised learning, which states that the labels vary slowly along the manifolds or along the discrete graph formed by connecting nearby items. Consequently, the learning algorithm will predict Figure 1(c). The mathematics of manifold learning is well understood [1, 6, 9, 10]. Alternatively, the designer can choose to ignore the unlabeled data and perform supervised learning, which results in Figure 1(b).

When the learner is a human being, however, the answer is not so clear. Consider that the human learner does not directly see how the items are distributed in the feature space (such as Figure 1(a)), but only a set of items (such as those in Figure 2(a)). The underlying manifold structure of the data may not be immediately obvious. Thus there are many possibilities for how the human learner will behave: 1) they may completely ignore the manifold structure and perform supervised learning; 2) they may discover the manifold under some learning conditions and not others; or 3) they may always learn using the manifold. For readers not familiar with manifold learning, the setting might seem artificial.
But in fact, many natural stimuli we encounter in everyday life are distributed on manifolds. An important example is face recognition, where different poses (viewing angles) of the same face produce different 2D images. These images can be quite different, as in the frontal and profile views of a person. However, if we continuously change the viewing angle, these 2D images will form a one-dimensional manifold in a very high dimensional image space. This example illustrates the importance of a manifold in facilitating learning: if we can form and maintain such a face manifold, then with a single label (e.g., the name) on one of the face images, we can recognize all other poses of that person by propagating the label along the manifold. The same is true for visual object recognition in general. Other, more abstract stimuli form manifolds, or their discrete analogue, graphs. For example, text documents in a corpus occupy a potentially nonlinear manifold in the otherwise very high dimensional space used to represent them, such as the "bag of words" representation.

There exists little empirical evidence addressing the question of whether human beings can learn using manifolds when classifying objects, and the few studies we are aware of come to opposing conclusions. For instance, Wallis and Bülthoff created artificial image sequences in which a frontal face is morphed into the profile face of a different person. When participants were shown such sequences during training, their ability to match frontal and profile faces during testing was impaired [8]. This might be evidence that people depend on manifold structure stemming from temporal and spatial proximity to perform face recognition. On the other hand, Vandist et al. conducted a categorization experiment where the true decision boundary is at 45 degrees in a 2D stimulus space (i.e., an information-integration task). They showed that when the two classes are elongated Gaussians, which are parallel to, and on opposite sides of, the decision boundary, unlabeled data does not help learning [7]. If we view these two elongated Gaussians as linear manifolds, this result suggests that people do not generally learn using manifolds.

This study seeks to understand under what conditions, if any, people are capable of manifold learning in a semi-supervised setting. The study has important implications for cognitive psychology: first, if people are capable of learning manifolds, this suggests that manifold-learning models developed in machine learning can provide hypotheses about how people categorize objects in natural domains like face recognition, where manifolds appear to capture the true structure of the domain. Second, if there are reliable methods for encouraging manifold learning in people, these methods can be employed to aid learning in other domains that are structured along manifolds. For machine learning, our study will help in the design of algorithms which can decide when to invoke the manifold learning assumption.

2 Human Manifold Learning Experiments
We designed and conducted a set of experiments to study manifold learning in humans, with the following design considerations. First, the task was a "batch learning" paradigm in which participants viewed all labeled and unlabeled items at once (in contrast to an "online" or sequential learning paradigm where items appear one at a time). Batch learning allows us to compare human behavior against well-established machine learning models that typically operate in batch mode. Second, we avoided using faces or familiar 3D objects as stimuli, despite their natural manifold structure as discussed above, because we wished to avoid any bias resulting from strong prior real-world knowledge. Instead, we used unfamiliar stimuli from which we could add or remove a manifold structure easily. This design should allow our experiments to shed light on people's intrinsic ability to learn using a manifold.

Participants and Materials. In the first set of experiments, 139 university undergraduates participated for partial course credit. A computer interface was created to represent a table with three bins, as shown in Figure 2(a). Unlabeled cards were initially placed in a central white bin, with bins to
Second, we avoided using faces or familiar 3D objects as stimuli, despite their natural manifold structures as discussed above, because we wished to avoid any bias resulting from strong prior real-world knowledge. Instead, we used unfamiliar stimuli, from which we could add or remove a manifold structure easily. This design should allow our experiments to shed light on people?s intrinsic ability to learn using a manifold. Participants and Materials. In the first set of experiments, 139 university undergraduates participated for partial course credit. A computer interface was created to represent a table with three bins, as shown in Figure 2(a). Unlabeled cards were initially placed in a central white bin, with bins to 2 either side colored red and blue to indicate the two classes y ? {?1, 1}. Each stimulus is a card. Participants sorted cards by clicking and dragging with a mouse. When a card was clicked, other similar cards could be ?highlighted? in gray (depending on condition). Labeled cards were pinned down in their respective red or blue bins and could not be moved, indicated by a ?pin? in the corner of the card. The layout of the cards was such that all cards remained visible at all times. Unlabeled cards could be re-categorized at any time by dragging from any bin to any other bin. Upon sorting all cards, participants would click a button to indicating completion. Two sets of stimuli were created. The first, used solely to acquaint the participants with the interface, consisted of a set of 20 cards with animal line drawings on a white background. The images were chosen to approximate a linear continuum between fish and mammal, with shark, dolphin, and whale at the center. The second set of stimuli used for the actual experiment was composed of 82 ?crosshair? cards, each with a pair of perpendicular, axis-parallel lines, all of equal length, crossing on a white background. Four examples are shown in Figure 2(b). Each card therefore can be encoded as x ? [0, 1]2 , whose two features representing the positions of the vertical and horizontal lines, respectively. (a) Card sorting interface (b) x1 = (0, 0.1), x2 = (1, 0.9), x3 = (0.39, 0.41), x4 = (0.61, 0.59) Figure 2: Experimental interface (with highlighting shown), and example crosshair stimuli. Procedure. Each participant was given two tasks to complete. Task 1 was a practice task to familiarize the participant with the interface. The participant was asked to sort the set of 20 animal cards into two categories, with the two ends of the continuum (a clown fish and a dachshund) labeled. Participants were told that when they clicked on a card, highlighting of similar cards might occur. In reality, highlighting was always shown for the two nearest-neighboring cards (on the defined continuum) of a clicked card. Importantly, we designed the dataset so that, near the middle of the continuum, cards from opposite biological classes would be highlighted together. For example, when a dolphin was clicked, both a shark and a whale would be highlighted. The intention was to indicate to the participant that highlighting is not always a clear give-away for class labels. At the end of task 1 their fish vs. mammal classification accuracy was presented. No time limit was enforced. Task 2 asked the participant to sort a set of 82 crosshair cards into two categories. The set of cards, the number of labeled cards, and the highlighting of cards depended on condition. 
The participant was again told that some cards might be highlighted, whether or not the condition actually provided highlighting. The participant was also told that cards that shared highlighting might not all have the same classification. Again, no time limit was enforced. After the participant completed this task, a follow-up questionnaire was administered.

Conditions. Each of the 139 participants was randomly assigned to one of 6 conditions, shown in Figure 3, which varied according to three manipulations:

The number of labeled items l can be 2 or 4 (2l vs. 4l). For conditions with two labeled items, the labeled items are always (x_1, y_1 = −1), (x_2, y_2 = 1); with four labeled items, they are always (x_1, y_1 = −1), (x_2, y_2 = 1), (x_3, y_3 = 1), (x_4, y_4 = −1). The features of x_1 . . . x_4 are those given in Figure 2(b). We chose these four labeled points by maximizing the prediction differences made by seven machine learning models, as discussed in the next section.

Unlabeled items are distributed on a uniform grid or on manifolds (gridU vs. moonsU). The items x_5 . . . x_82 were either on a uniform grid in the 2D feature space, or along two "half-moons", a well-studied dataset in the semi-supervised learning community. No linear boundary can separate the two moons in feature space. x_3 and x_4, if unlabeled, are the same as in Figure 2(b).

Highlighting of similar items or not (the suffix h). For the moonsU conditions, the neighboring cards of any clicked card may be highlighted. The neighborhood is defined as within a radius of ε = 0.07 in the Euclidean feature space. This value was chosen because it includes at least two neighbors for each point in the moonsU dataset. To form the unweighted graph shown in Figure 3, an edge is placed between all neighboring points.

The rationale for comparing these different conditions will become apparent as we consider how different machine learning models perform on these datasets.

Figure 3: The six experimental conditions (2l gridU, 8 participants; 2l moonsU, 8; 2l moonsU h, 8; 4l gridU, 22; 4l moonsU, 24; 4l moonsU h, 23). Large symbols indicate labeled items, dots unlabeled items. Highlighting is represented as graph edges.

3 Model Predictions
We hypothesize that human participants consider a set of models ranging from simple to sophisticated, and that they will perform model selection based on the training data given to them. We start by considering seven typical machine learning models to motivate our choice, and present the models we actually use later on. The seven models are: (graph) graph-based semi-supervised learning [1, 10], which propagates labels along the graph; it reverts to supervised learning when there is no graph (i.e., no highlighting). (1NN,ℓ2) a 1-nearest-neighbor classifier with ℓ2 (Euclidean) distance. (1NN,ℓ1) a 1-nearest-neighbor classifier with ℓ1 (Manhattan) distance. These two models are similar to exemplar models in psychology [3]. (multi-v) multiple vertical linear boundaries. (multi-h) multiple horizontal linear boundaries. (single-v) a single vertical linear boundary. (single-h) a single horizontal linear boundary. We plot the label predictions of these 7 models on four of the six conditions in Figure 4. Their predictions on 2l moonsU are identical to 2l moonsU h, and on 4l moonsU identical to 4l moonsU h, except that (graph) is not available.

For conceptual simplicity and elegance, instead of using these disparate models we adopt a single model capable of making all of these predictions. In particular, we use a Gaussian Process (GP) with different kernels (i.e., covariance functions) k to simulate the seven models. For details on GPs, see standard textbooks such as [4]. In particular, we find seven different kernels k that match GP classification to each of the seven model predictions on all 6 conditions. This is somewhat unusual in that our GPs are not learned from data, but by matching other models' predictions. Nonetheless, it is a valid procedure for creating seven different GPs which will later be compared against human data. For models (1NN,ℓ2), (multi-v), (multi-h), (single-v), and (single-h), we use diagonal RBF kernels diag(σ₁², σ₂²) and tune σ₁, σ₂ on a coarse parameter grid to minimize classification disagreement w.r.t. the corresponding model's predictions on all 6 conditions. For model (1NN,ℓ1) we use a Laplace kernel and tune its bandwidth. For model (graph), we produce a graph kernel k̃ following the Reproducing Kernel Hilbert Space trick in [6]. That is, we extend a base RBF kernel k with a graph component:
  k̃(x, z) = k(x, z) − k_x^T (I + cLK)^{−1} cL k_z,   (1)
where x, z are two arbitrary items (not necessarily on the graph), k_x = (k(x, x_1), . . . , k(x, x_{l+u}))^T is the kernel vector between x and all l + u points x_1 . . . x_{l+u} in the graph, K is the (l + u) × (l + u) Gram matrix with K_ij = k(x_i, x_j), L is the unnormalized graph Laplacian matrix derived from the unweighted edges of the εNN graph defined earlier for highlighting, and c is a parameter that we tune. We take the base RBF kernel k to be the tuned kernel for model (1NN,ℓ2). It can be shown that k̃ is a valid kernel, formed by warping the base kernel k along the graph; see [6] for technical details.
Their predictions on 2l moonsU are identical to 2l moonsU h, and on 4l moonsU are identical to 4l moonsU h, except that ?(graph)? is not available. For conceptual simplicity and elegance, instead of using these disparate models we adopt a single model capable of making all these predictions. In particular, we use a Gaussian Process (GP) with different kernels (i.e., covariance functions) k to simulate the seven models. For details on GPs see standard textbooks such as [4]. In particular, we find seven different kernels k to match GP classification to each of the seven model predictions on all 6 conditions. This is somewhat unusual in that our GPs are not learned from data, but by matching other model predictions. Nonetheless, it is a valid procedure to create seven different GPs which will later be compared against human data. For models (1NN,?2 ), (multi-v), (multi-h), (single-v), and (single-h), we use diagonal RBF kernels diag(?12 , ?22 ) and tune ?1 , ?2 on a coarse parameter grid to minimize classification disagreement w.r.t. the corresponding model prediction on all 6 conditions. For model (1NN,?1 ) we use a Laplace kernel and tune its bandwidth. For model (graph), we produce a graph kernel k? following the Reproducing Kernel Hilbert Space trick in [6]. That is, we extend a base RBF kernel k with a graph component: ? z) = k(x, z) ? k? (I + cLK)?1 cLkz k(x, (1) x where x, z are two arbitrary items (not necessarily on the graph), kx = (k(x, x1 ), . . . , k(x, xl+u ))? is the kernel vector between x and all l + u points x1 . . . xl+u in the graph, K is the (l + u) ? (l + u) Gram matrix with Kij = k(xi , xj ), L is the unnormalized graph Laplacian matrix derived from unweighted edges on the ?NN graph defined earlier for highlighting, and c is the parameter that we tune. We take the base RBF kernel k to be the tuned kernel for model (1NN,?2 ). It can be shown that 4 k? is a valid kernel formed by warping the base kernel k along the graph, see [6] for technical details. We used the GP classification implementation with Expectation Propagation approximation [5]. In the end, our seven GPs were able to exactly match the predictions made by the seven models in Figure 4. We will use these GPs in the rest of the paper. (graph) 2 2 l 4 U - grid U moons h 4 l l l (1NN,?2 ) (multi-h) (single-v) (single-h) 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0 0 0 0 0 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 0 1 1 1 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0 0 0 0 0 0 0 0.5 1 - grid moons h (multi-v) 1 1 U U (1NN,?1 ) 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 0 1 1 1 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0 0 0 0 0 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 0 1 1 1 1 1 1 1 1 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0 0 0 0 0 0 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 0 1 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Figure 4: Predictions made by the seven models on 4 of the 6 conditions. 4 Behavioral Experiment Results We now compare human categorization behaviors to model predictions. We first consider the aggregate behavior for all participants within each condition. One way to characterize this aggregate behavior is the ?majority vote? of the participants on each item. That is, if more than half of the participants classified an item as y = 1, the majority vote classification for that item is y = 1, and so on. The first row in Figure 5 shows the majority vote for each condition. In these and all further plots, blue circles indicate y = ?1, red pluses y = 1, and green stars ambiguous, meaning the classification into positive or negative is half-half. 
4 Behavioral Experiment Results

We now compare human categorization behaviors to model predictions. We first consider the aggregate behavior of all participants within each condition. One way to characterize this aggregate behavior is the "majority vote" of the participants on each item. That is, if more than half of the participants classified an item as y = 1, the majority vote classification for that item is y = 1, and so on. The first row in Figure 5 shows the majority vote for each condition. In these and all further plots, blue circles indicate y = -1, red pluses y = 1, and green stars ambiguous, meaning the classification into positive or negative is half-half. We also compute how well the seven GPs predict human majority votes. The accuracies of these GP models are shown in Table 1. [Footnote 1: The condition 4l moonsU hR will be explained later in Section 5.]

[Figure 5 consists of four rows of six panels each, one column per condition: 2l gridU, 2l moonsU, 2l moonsU h, 4l gridU, 4l moonsU, 4l moonsU h.]
Figure 5: Human categorization results. (First row) the majority vote of participants within each condition. (Bottom three rows) a sample of responses from 18 different participants.

Of course, a majority vote only reveals average behavior. We have observed that there are wide participant variabilities. Participants appeared to find the tasks difficult, as their self-reported confidence scores were fairly low in all conditions. It was also noted that strategies for completing the task varied widely, with some participants simply categorizing cards in the order they appeared on the screen, while others took a much longer, studied approach. Most interestingly, different participants seem to use different models, as the individual participant plots in the bottom three rows of Figure 5 suggest.

We would like to be able to make a claim about what model, from our set of models, each participant used for classification. In order to do this, we compute per-participant accuracies of the seven models on that participant's classifications. We then find the model M with the highest accuracy for the participant, out of the seven models. If this highest accuracy is above 0.75, we declare that the participant is potentially using model M; otherwise no model is deemed a good fit and we say the participant is using some "other" model. We show the proportion of participants in each condition attributed to each of our seven models, plus "other", in Table 2. A sketch of this attribution rule follows the tables.

Table 1: GP model accuracy in predicting human majority vote for each condition.

             2l gridU  2l moonsU  2l moonsU h  4l gridU  4l moonsU  4l moonsU h  4l moonsU hR
(graph)        0.81      0.47       0.50        0.54      0.64       0.97        0.68
(1NN,L2)       0.94      0.84       0.78        0.61      0.62       0.76        0.63
(1NN,L1)       0.84      0.62       0.56        0.64      0.60       0.54        0.44
(multi-v)      0.86      0.74       0.76        0.64      0.69       0.64        0.56
(multi-h)      0.58      0.42       0.36        0.50      0.47       0.31        0.40
(single-v)     0.85      0.79       0.76        0.60      0.38       0.65        0.59
(single-h)     0.61      0.45       0.39        0.51      0.45       0.26        0.42

Table 2: Percentage of participants potentially using each model.

             2l gridU  2l moonsU  2l moonsU h  4l gridU  4l moonsU  4l moonsU h  4l moonsU hR
(graph)        0.12      0.00       0.12        0.00      0.25       0.39        0.13
(1NN,L2)       0.00      0.12       0.00        0.05      0.25       0.09        0.03
(1NN,L1)       0.12      0.00       0.00        0.09      0.12       0.09        0.07
(multi-v)      0.25      0.25       0.38        0.00      0.12       0.04        0.00
(multi-h)      0.25      0.25       0.25        0.00      0.00       0.04        0.00
(single-v)     0.12      0.25       0.00        0.18      0.04       0.00        0.07
(single-h)     0.00      0.00       0.00        0.09      0.08       0.13        0.03
other          0.12      0.12       0.25        0.59      0.38       0.22        0.67
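The attribution rule behind Table 2 can be written down directly; the data structures here (label arrays and a dictionary of model predictions) are our own assumptions about how responses are stored.

    import numpy as np

    def attribute_model(participant_labels, model_predictions, threshold=0.75):
        """Return the name of the model with the highest accuracy on this
        participant's +/-1 classifications, or 'other' if no model exceeds
        the threshold."""
        best_name, best_acc = "other", -1.0
        for name, pred in model_predictions.items():
            acc = float(np.mean(np.asarray(pred) == np.asarray(participant_labels)))
            if acc > best_acc:
                best_name, best_acc = name, acc
        return best_name if best_acc > threshold else "other"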
Based on Figure 5, Table 1, and Table 2, we make some observations:

1. When there are only two labeled points, the unlabeled distribution does not encourage humans to perform manifold learning (comparing 2l gridU vs. 2l moonsU). That is, they do not follow the possible implicit graph structure (2l moonsU). Instead, in both conditions they prefer a simple single vertical or horizontal decision boundary, as Table 2 shows. [Footnote 2: The two rows in Table 1 for these two conditions are therefore misleading, as each averages classifications made with vertical and horizontal decision boundaries. Also note that in the 2l conditions (multi-v) and (multi-h) are effectively single linear boundary models (see Figure 4) and differ from (single-v) and (single-h) only slightly due to the training method used.]

2. With two labeled points, even if they are explicitly given the graph structure in the form of highlighting, participants still do not perform manifold learning (comparing 2l moonsU vs. 2l moonsU h). It seems they are "blocked" by the simpler vertical or horizontal hypothesis, which perfectly explains the labeled data.

3. When there are four labeled points but no highlighting, the distribution of unlabeled data still does not encourage people to perform manifold learning (comparing 4l gridU vs. 4l moonsU). This further suggests that people cannot easily extract manifold structure from unlabeled data in order to learn, when there is no hint to do so. However, most participants have given up the simple single vertical or horizontal decision boundary, because it contradicts the four labeled points.

4. Finally, when we provide the graph structure, there is a marked switch to manifold learning (comparing 4l moonsU vs. 4l moonsU h). This suggests that a combination of the elimination of preferred, simpler hypotheses, together with a stronger graph hint, finally gives the originally less preferred manifold learning model a chance of being used. It is under this condition that we observed human manifold learning behavior.

5 Humans do not Blindly Follow the Highlighting

Do humans really learn using manifolds? Could they have adopted a "follow-the-highlighting" procedure to label the manifolds 100% correctly: in the beginning, click on a labeled card x to highlight its neighboring unlabeled cards; pick one such neighbor x′ and classify it with the label of x; now click on (the now labeled) x′ to find one of its unlabeled neighbors x″, and repeat? Because our graph has disconnected components with consistently labeled seeds, this procedure will succeed. The procedure is known as propagating-1NN in semi-supervised learning (Algorithm 2.7, [11]). In this section we present three arguments that humans are not blindly following the highlighting.

First, participants in 2l moonsU h did not learn the manifold while those in 4l moonsU h did, even though the two conditions have the same εNN highlighting.

Second, a necessary condition for follow-the-highlighting is to always classify an unlabeled x′ according to a labeled highlighted neighbor x. Conversely, if a participant classifies x′ as class y′ while all neighbors of x′ are either still unlabeled or have labels other than y′, she could not have been using follow-the-highlighting on x′. We say she has taken a leap-of-faith on x′. The 4l moonsU h participants had an average of 17 leaps-of-faith among about 78 classifications [Footnote 3: The individual numbers of leaps-of-faith are 0, 1, 2, 4, 10, 13, 13, 14, 14, 15, 15, 16, 18, 19, 20, 21, 22, 24, 25, 27, 33, 36, and 36 respectively, for the 23 participants.], while a strict follow-the-highlighting procedure would yield zero leaps-of-faith. One way to count leaps-of-faith from response logs is sketched below.
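Under our own assumption that each participant's session is logged as an ordered list of (item, label) events, the leap-of-faith count can be computed as follows.

    def count_leaps_of_faith(events, neighbors, seed_labels):
        """events: ordered (item, label) classifications by one participant.
        neighbors: dict mapping an item to the set of its graph neighbors.
        seed_labels: dict mapping each labeled seed card to its label.
        A leap of faith is a classification of an item none of whose
        neighbors currently carries that label."""
        labels = dict(seed_labels)
        leaps = 0
        for item, y in events:
            if not any(labels.get(nb) == y for nb in neighbors.get(item, ())):
                leaps += 1
            labels[item] = y
        return leaps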
Third, the basic challenge to follow-the-highlighting is that the underlying manifold structure of the stimuli may have been irrelevant. Would participants have shown the same behavior, following the highlighting, regardless of the actual stimuli? We therefore designed the following experiment. Take the 4l moonsU h graph, which has 4 labeled nodes, 78 unlabeled nodes, and an adjacency matrix (i.e., edges) defined by εNN, as shown in Figure 3. Take a random permutation π = (π1, . . . , π78). Map the feature vector of the i-th unlabeled point to x_{πi}, while keeping the adjacency matrix the same. This creates the random-looking graph in Figure 6(a), which we call the 4l moonsU hR condition (the suffix R stands for random); it is equivalent to the 4l moonsU h graph in structure. In particular, there are two connected components with consistent labeled seeds. However, now the highlighted neighbors may look very different from the clicked card. If we assume humans blindly follow the highlighting (perhaps noisily), then we predict that they are more likely to classify those unlabeled points nearer (in shortest path length on the graph, not Euclidean distance) a labeled point with the latter's label, and that this correlation should be the same under 4l moonsU hR and 4l moonsU h.

This prediction turns out to be false. 30 additional undergraduates participated in the new 4l moonsU hR condition. Figure 6(b) shows the above behavioral evaluation, which does not exhibit the predicted correlation, and is clearly different from the same evaluation for 4l moonsU h in Figure 6(c). Again, this is evidence that humans are not just following the highlighting. In fact, human behavior in 4l moonsU hR is similar to 4l moonsU. That is, having random highlighting is similar to having no highlighting in how it affects human categorization. This can be seen from the last rows of Tables 1 and 2, and Figure 6(d). [Footnote 4: In addition, if we create a GP from the Laplacian of the random highlighting graph, the GP accuracy in predicting the 4l moonsU hR human majority vote is 0.46, and the percentage of participants in 4l moonsU hR who can be attributed to this model is 0.]

[Figure 6 consists of four panels (a)-(d); panels (b) and (c) plot empirical accuracy against shortest path length.]
Figure 6: The 4l moonsU hR experiment with 30 participants. (a) The 4l moonsU hR condition. (b) The behavioral evaluation for 4l moonsU hR, where the x-axis is the shortest path length of an unlabeled point to a labeled point, and the y-axis is the fraction of participants who classified that unlabeled point consistently with the nearest labeled point. (c) The same behavioral evaluation for 4l moonsU h. (d) The majority vote in 4l moonsU hR.

6 Discussion

We have presented a set of experiments exploring human manifold learning behaviors. Our results suggest that people can perform manifold learning, but only when there is no alternative, simpler explanation of the data, and people need strong hints about the graph structure. We propose that Bayesian model selection is one possible way to explain these human behaviors. Recall we defined seven Gaussian Processes, each with a different kernel. For a given GP with kernel k, the evidence p(y_{1:l} | x_{1:l}, k) is the marginal likelihood on the labeled data, integrating out the hidden discriminant function sampled from the GP. With multiple candidate GP models, one may perform model selection by selecting the one with the largest marginal likelihood. From the absence of manifold learning in conditions without highlighting or with random highlighting, we speculate that the GP with the graph-based kernel k̃ in (1) is special: it is accessible in a participant's repertoire only when strong hints (highlighting) exist and agree with the underlying unlabeled data manifold structure.
Under this assumption, we can then explain the contrast between the lack of manifold learning in 2l moonsU h and the presence of manifold learning in 4l moonsU h. On one hand, for the 2l moonsU h condition, the evidence for the seven GP models on the two labeled points is: (graph) 0.249, (1NN,L2) 0.250, (1NN,L1) 0.250, (multi-v) 0.250, (multi-h) 0.250, (single-v) 0.249, (single-h) 0.249. The graph-based GP has slightly lower evidence than several other GPs, which may be due to our specific choice of kernel parameters in (1). In any case, there is no reason to prefer the GP with a graph kernel, and we do not expect humans to learn on the manifold in 2l moonsU h. On the other hand, for 4l moonsU h, the evidence for the seven GP models on those four labeled points is: (graph) 0.0626, (1NN,L2) 0.0591, (1NN,L1) 0.0625, (multi-v) 0.0625, (multi-h) 0.0625, (single-v) 0.0341, (single-h) 0.0342. The graph-based GP has a small lead over the other GPs. In particular, it is better than the evidence 1/16 for kernels that treat the four labeled points essentially independently. The graph-based GP obtains this lead by warping the space along the two manifolds so that the two positive (resp. negative) labeled points tend to co-vary. Thus, there is a reason to prefer the GP with a graph kernel, and we do expect humans to learn on the manifold in 4l moonsU h.

We also explore the convex combination of the seven GPs as a richer model for human behavior: k(α) = Σ_{i=1}^{7} α_i k_i, where α_i ≥ 0 and Σ_i α_i = 1. This allows a weighted combination of kernels to be used, and is more powerful than selecting a single kernel. Again, we optimize the mixing weights α by maximizing the evidence p(y_{1:l} | x_{1:l}, k(α)). This is a constrained optimization problem, and can be easily solved up to a local optimum (because the evidence is in general non-convex) with a projected gradient method, given the gradient of the log evidence. For the 2l moonsU h condition, in 100 trials with random starting α values, the maximum evidence always converges to 1/4, while the optimal α is not unique and occupies a subspace (0, α2, α3, α4, α5, 0, 0) with α2 + α3 + α4 + α5 = 1 and mean (0, 0.27, 0.25, 0.22, 0.26, 0, 0). Note that the weight α1 for the graph-based kernel is zero. In contrast, for the 4l moonsU h condition, in 100 trials α overwhelmingly converges to (1, 0, 0, 0, 0, 0, 0) with evidence 0.0626; i.e., it again suggests that people would perform manifold learning in 4l moonsU h.

Of course, this Bayesian model selection analysis is over-simplified. For instance, we did not consider people's prior p(k) on GP models, i.e., which model they would prefer before seeing the data. It is possible that humans favor models which produce axis-parallel decision boundaries. Defining and incorporating non-uniform priors is a topic for future research.
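For reference, here is a generic projected gradient ascent of the kind used above; the evidence function is left abstract (in our case it would be the EP-approximated GP marginal likelihood), the step size and finite-difference gradient are simplifications, and the simplex projection is the standard Euclidean one.

    import numpy as np

    def project_to_simplex(v):
        """Euclidean projection of v onto {a : a >= 0, sum(a) = 1}."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        return np.maximum(v + (1.0 - css[rho]) / (rho + 1), 0.0)

    def maximize_evidence(log_evidence, k=7, steps=200, lr=0.1, seed=0):
        """Projected gradient ascent of alpha -> log p(y_{1:l} | x_{1:l}, k(alpha)),
        using a crude forward-difference gradient; finds a local optimum only."""
        rng = np.random.default_rng(seed)
        alpha = project_to_simplex(rng.random(k))
        eps = 1e-4
        for _ in range(steps):
            base = log_evidence(alpha)
            grad = np.array([(log_evidence(alpha + eps * e) - base) / eps
                             for e in np.eye(k)])
            alpha = project_to_simplex(alpha + lr * grad)
        return alpha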
Acknowledgments

We thank Rob Nowak and the anonymous reviewers for their valuable comments that motivated us to conduct the new experiments discussed in Section 5 after the initial review. This work is supported in part by NSF IIS-0916038, NSF IIS-0953219, NSF DRM/DLS-0745423, and AFOSR FA9550-09-1-0313.

References

[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, November 2006.
[2] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. Semi-supervised learning. MIT Press, 2006.
[3] R. M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1):39-57, 1986.
[4] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[5] Carl E. Rasmussen and Christopher K. I. Williams. GPML matlab code, 2007. http://www.gaussianprocess.org/gpml/code/matlab/doc/, accessed May 2010.
[6] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In ICML05, 22nd International Conference on Machine Learning, 2005.
[7] Katleen Vandist, Maarten De Schryver, and Yves Rosseel. Semisupervised category learning: The impact of feedback in learning the information-integration task. Attention, Perception, & Psychophysics, 71(2):328-341, 2009.
[8] Guy Wallis and Heinrich H. Bülthoff. Effects of temporal association on recognition memory. Proceedings of the National Academy of Sciences, 98(8):4800-4804, 2001.
[9] Dengyong Zhou, Olivier Bousquet, Thomas Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2004.
[10] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In The 20th International Conference on Machine Learning (ICML), 2003.
[11] Xiaojin Zhu and Andrew B. Goldberg. Introduction to Semi-Supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, San Rafael, CA, 2009.
3,208
3,906
A New Probabilistic Model for Rank Aggregation

Tao Qin, Microsoft Research Asia, [email protected]
Xiubo Geng, Chinese Academy of Sciences, [email protected]
Tie-Yan Liu, Microsoft Research Asia, [email protected]

Abstract

This paper is concerned with rank aggregation, which aims to combine multiple input rankings to get a better ranking. A popular approach to rank aggregation is based on probabilistic models on permutations, e.g., the Luce model and the Mallows model. However, these models have their limitations, in either poor expressiveness or high computational complexity. To avoid these limitations, in this paper we propose a new model, which is defined with a coset-permutation distance and models the generation of a permutation as a stagewise process. We refer to the new model as the coset-permutation distance based stagewise (CPS) model. The CPS model has rich expressiveness and can therefore be used in versatile applications, because many different permutation distances can be used to induce the coset-permutation distance. The complexity of the CPS model is low because of the stagewise decomposition of the permutation probability and the efficient computation of most coset-permutation distances. We apply the CPS model to supervised rank aggregation, derive the learning and inference algorithms, and empirically study their effectiveness and efficiency. Experiments on public datasets show that the derived algorithms based on the CPS model can achieve state-of-the-art ranking accuracy, and are much more efficient than previous algorithms.

1 Introduction

Rank aggregation aims at combining multiple rankings of objects to generate a better ranking. It is the key problem in many applications. For example, in meta search [1], when users issue a query, the query is sent to several search engines and the rankings given by them are aggregated to generate more comprehensive ranking results.

Given the underlying correspondence between ranking and permutation, probabilistic models on permutations, which originated in statistics [19, 5, 4], have been widely applied to solve the problems of rank aggregation. Among different models, the Mallows model [15, 6] and the Luce model [14, 18] are the most popular ones.

The Mallows model is a distance-based model, which defines the probability of a permutation according to its distance to a location permutation. Due to the many applicable permutation distances, the Mallows model has very rich expressiveness, and therefore can potentially be used in many different applications. Its weakness lies in its high computational complexity: in many cases, it requires a time complexity of O(n!) to compute the probability of a single permutation of n objects. This is clearly intractable when we need to rank a large number of objects in real applications.

The Luce model is a stagewise model, which decomposes the process of generating a permutation of n objects into n sequential stages. At the k-th stage, an object is selected and assigned to position k according to a probability based on the scores of the unassigned objects. The product of the selection probabilities over all the stages defines the probability of the permutation. The Luce model is highly efficient (with a polynomial time complexity) due to this decomposition. The expressiveness of the Luce model, however, is limited because it is defined on the scores of individual objects and cannot leverage versatile distance measures between permutations.
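As a quick illustration of this stagewise selection (with made-up scores), a Plackett-Luce-style sampler can be written as follows.

    import numpy as np

    def sample_luce(scores, rng):
        """Stagewise sampling: at each stage pick one remaining object with
        probability proportional to exp(score), then remove it."""
        remaining = list(range(len(scores)))
        ranking = []
        while remaining:
            w = np.exp(np.asarray([scores[i] for i in remaining], dtype=float))
            i = rng.choice(len(remaining), p=w / w.sum())
            ranking.append(remaining.pop(i))
        return ranking  # ranking[k] is the object placed at position k+1

    rng = np.random.default_rng(0)
    print(sample_luce([2.0, 1.0, 0.5, 0.0], rng))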
In this paper, we propose a new probabilistic model on permutations, which inherits the advantages of both the Luce model and the Mallows model and avoids their limitations. We refer to the model as the coset-permutation distance based stagewise (CPS) model. Different from the Mallows model, the CPS model is a stagewise model. It decomposes the generative process of a permutation π into sequential stages, which makes efficient computation possible. At the k-th stage, an object is selected and assigned to position k with a certain probability. Different from the Luce model, the CPS model defines the selection probability based on the distance between a location permutation σ and the right coset of π (referred to as the coset-permutation distance) at each stage. In this sense, it is also a distance-based model. Because many different permutation distances can be used to induce the coset-permutation distance, the CPS model also has rich expressiveness. Furthermore, the coset-permutation distances induced by many popular permutation distances can be computed with polynomial time complexity, which further ensures the efficiency of the CPS model.

We then apply the CPS model to supervised rank aggregation and derive the corresponding algorithms for learning and inference of the model. Experiments on public datasets show that the CPS model based algorithms can achieve state-of-the-art ranking accuracy, and are much more efficient than baseline methods based on previous probabilistic models.

2 Background

2.1 Rank Aggregation

There are mainly two kinds of rank aggregation, i.e., score-based rank aggregation [17, 16] and order-based rank aggregation [2, 7, 3]. In the former, objects in the input rankings are associated with scores, while in the latter, only the order information of these objects is available. In this work, we focus on order-based rank aggregation, because it is more popular in real applications [7], and score-based rank aggregation can be easily converted to order-based rank aggregation by ignoring the additional score information [7].

Early methods for rank aggregation are heuristic based. For example, BordaCount [2, 7] and median rank aggregation [8] are simply based on average rank positions or the number of pairwise wins. In the recent literature, probabilistic models on permutations, such as the Mallows model and the Luce model, have been introduced to solve the problem of rank aggregation. Previous studies have shown that the probabilistic model based algorithms can outperform the heuristic methods in many settings. For example, the Mallows model has been shown very effective in both supervised and unsupervised rank aggregation, and the effectiveness of the Luce model has been demonstrated in the context of unsupervised rank aggregation. In the next subsection, we describe these two models in more detail.

2.2 Probabilistic Models on Permutations

In order to better illustrate the probabilistic models on permutations, we first introduce some concepts and notations. Let {1, 2, . . . , n} be a set of objects to be ranked. A ranking/permutation [Footnote 1: We will use the two terms interchangeably in this paper.] π is a bijection from {1, 2, . . . , n} to itself. We use π(i) to denote the position given to object i and π⁻¹(i) to denote the object assigned to position i. We usually write π and π⁻¹ as vectors whose i-th components are π(i) and π⁻¹(i), respectively. We also use the bracket alternative notation to represent a permutation, i.e., π = ⟨π⁻¹(1), π⁻¹(2), . . . , π⁻¹(n)⟩.
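In code, a permutation and its inverse are just two index arrays; we use 0-based indices here, while the text above is 1-based.

    def inverse(pi):
        """Given pi with pi[i] = position of object i, return inv with
        inv[p] = object at position p (the bracket notation above).
        The same function also maps position->object arrays back."""
        inv = [0] * len(pi)
        for obj, pos in enumerate(pi):
            inv[pos] = obj
        return inv

    pi = [2, 0, 1]        # object 0 sits at position 2, object 1 at position 0, ...
    print(inverse(pi))    # [1, 2, 0], i.e. <pi^{-1}(1), pi^{-1}(2), pi^{-1}(3)>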
The collection of all permutations of n objects forms a non-abelian group under composition, called the symmetric group of order n, denoted as Sn. Let S_{n−k} denote the subgroup of Sn consisting of all permutations whose first k positions are fixed:

    S_{n−k} = {π ∈ Sn | π(i) = i, ∀i = 1, . . . , k}.    (1)

The right coset S_{n−k}σ = {πσ | π ∈ S_{n−k}} is a subset of permutations whose top-k objects are exactly the same as in σ. In other words, S_{n−k}σ = {π ∈ Sn | π⁻¹(i) = σ⁻¹(i), ∀i = 1, . . . , k}. We also use S_{n−k}(⟨i1, i2, . . . , ik⟩) to denote the right coset with object i1 in position 1, i2 in position 2, . . . , and ik in position k.

The Mallows model is a distance-based probabilistic model on permutations. It uses a permutation distance d on the symmetric group Sn to define the probability of a permutation:

    P(π | θ, σ) = (1 / Z(θ, σ)) exp(−θ d(π, σ)),    (2)

where σ ∈ Sn is the location permutation, θ ∈ R is a dispersion parameter, and

    Z(θ, σ) = Σ_{π ∈ Sn} exp(−θ d(π, σ)).    (3)

There are many well-defined metrics to measure the distance between two permutations, such as Spearman's rank correlation d_r(π, σ) = Σ_{i=1}^{n} (π(i) − σ(i))², Spearman's footrule d_f(π, σ) = Σ_{i=1}^{n} |π(i) − σ(i)|, and Kendall's tau d_t(π, σ) = Σ_{i=1}^{n} Σ_{j>i} 1{σπ⁻¹(i) > σπ⁻¹(j)}, where 1{x} = 1 if x is true and 0 otherwise. One can (and sometimes should) choose different distances for different applications. In this regard, the Mallows model has rich expressiveness.

Note that there are n! permutations in Sn, so the computation of Z(θ, σ) involves a sum of n! items. Although for some specific distances (such as d_t) there exist efficient ways for parameter estimation in the Mallows model, for many other distances (such as d_r and d_f) there is no known efficient method to compute Z(θ, σ), and one has to pay for the high computational complexity of O(n!) [9]. This has greatly limited the application of the Mallows model to real problems. Usually, one has to employ sampling methods such as MCMC to reduce the complexity [12, 11]. This, however, affects the effectiveness of the model.

The Luce model is a stagewise probabilistic model on permutations. It assumes that there is a (hidden) score ω_i, i = 1, . . . , n, for each individual object i. To generate a permutation π, first the object π⁻¹(1) is assigned to position 1 with probability exp(ω_{π⁻¹(1)}) / Σ_{i=1}^{n} exp(ω_{π⁻¹(i)}); second, the object π⁻¹(2) is assigned to position 2 with probability exp(ω_{π⁻¹(2)}) / Σ_{i=2}^{n} exp(ω_{π⁻¹(i)}); the assignment is continued until a complete permutation is formed. In this way, we obtain the permutation probability of π as follows:

    P(π) = Π_{i=1}^{n} [ exp(ω_{π⁻¹(i)}) / Σ_{j=i}^{n} exp(ω_{π⁻¹(j)}) ].    (4)

The computation of the permutation probability in the Luce model is very efficient, as shown above; the corresponding complexity is polynomial in the number of objects. This is a clear advantage over the Mallows model. However, the Luce model is defined as a specific function of the scores of the objects, and therefore cannot make use of versatile permutation distances. As a result, its expressiveness is not as rich as that of the Mallows model, which may limit its applications.
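The three distances translate directly into code (0-based arrays standing in for the 1-based notation; inverse is the helper from the previous sketch).

    def d_r(pi, sigma):
        """Spearman's rank correlation distance."""
        return sum((p - s) ** 2 for p, s in zip(pi, sigma))

    def d_f(pi, sigma):
        """Spearman's footrule."""
        return sum(abs(p - s) for p, s in zip(pi, sigma))

    def d_t(pi, sigma):
        """Kendall's tau: pairs of positions in pi whose objects appear
        in the opposite order in sigma."""
        n = len(pi)
        obj_at = inverse(pi)  # object at each position of pi
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if sigma[obj_at[i]] > sigma[obj_at[j]])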
3 A New Probabilistic Model

As discussed in the above section, both the Mallows model and the Luce model have certain advantages and limitations. In this section, we propose a new probabilistic model on permutations, which inherits their advantages and avoids their limitations. We call this model the coset-permutation distance based stagewise (CPS) model.

3.1 The CPS Model

As indicated by the name, the CPS model is defined on the basis of the so-called coset-permutation distance. A coset-permutation distance is induced from a permutation distance, as shown in the following definition.

Definition 1. Given a permutation distance d, the coset-permutation distance d̃ from a coset S_{n−k}σ to a target permutation π is defined as the average distance between the permutations in the coset and the target permutation:

    d̃(S_{n−k}σ, π) = (1 / |S_{n−k}σ|) Σ_{τ ∈ S_{n−k}σ} d(τ, π),    (5)

where |S_{n−k}σ| is the number of permutations in the set S_{n−k}σ.

It is easy to verify that if the permutation distance d is right invariant, then the induced coset-permutation distance d̃ is also right invariant.

With the concept of coset-permutation distance, given a dispersion parameter θ ∈ R and a location permutation σ ∈ Sn, we can define the CPS model as follows. Specifically, the generative process of a permutation π of n objects is decomposed into n sequential stages. As an initialization, all the objects are placed in a working set. At the k-th stage, the task is to select the k-th object of the permutation π out of the working set. The probability of this selection is defined with the coset-permutation distance between the right coset S_{n−k}π and the location permutation σ:

    exp(−θ d̃(S_{n−k}π, σ)) / Σ_{j=k}^{n} exp(−θ d̃(S_{n−k}(π, k, j), σ)),    (6)

where S_{n−k}(π, k, j) denotes the right coset including all the permutations that rank objects π⁻¹(1), . . . , π⁻¹(k − 1), and π⁻¹(j) in the top k positions respectively. From Eq. (6), we can see that the closer the coset S_{n−k}π is to the location permutation σ, the larger the selection probability is. Considering all the n stages, we obtain the overall probability of generating π, as shown in the following definition.

Definition 2. The CPS model defines the probability of a permutation π conditioned on a dispersion parameter θ and a location permutation σ as:

    P(π | θ, σ) = Π_{k=1}^{n} [ exp(−θ d̃(S_{n−k}π, σ)) / Σ_{j=k}^{n} exp(−θ d̃(S_{n−k}(π, k, j), σ)) ],    (7)

where S_{n−k}(π, k, j) is defined as in the sentence after Eq. (6).

It is easy to verify that the probabilities P(π | θ, σ), π ∈ Sn, defined in the CPS model naturally form a distribution over Sn. That is, for each π ∈ Sn, we always have P(π | θ, σ) ≥ 0, and Σ_{π ∈ Sn} P(π | θ, σ) = 1.

In rank aggregation, one usually needs to combine multiple input rankings. To deal with this scenario, we further extend the CPS model, following the methodology used in [12]:

    P(π | θ, σ) = Π_{i=1}^{n} [ exp(−Σ_{m=1}^{M} θ_m d̃(S_{n−i}π, σ_m)) / Σ_{j=i}^{n} exp(−Σ_{m=1}^{M} θ_m d̃(S_{n−i}(π, i, j), σ_m)) ],    (8)

where θ = {θ1, . . . , θM} and σ = {σ1, . . . , σM}. The CPS model defined as above can be computed in a highly efficient manner, as discussed in the following subsection.
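For tiny n, Definition 1 and the model (7) can be checked by brute force; this enumeration has factorial cost and is only for intuition (it reuses inverse and d_r from the sketches above), which motivates the closed forms of the next subsection.

    from itertools import permutations
    import math

    def coset_dist(d, prefix, n, pi):
        """Average of d(tau, pi) over the coset fixing the objects in
        `prefix` at the top positions (Definition 1, eq. (5))."""
        rest = [o for o in range(n) if o not in set(prefix)]
        dists = []
        for tail in permutations(rest):
            order = list(prefix) + list(tail)    # position -> object
            dists.append(d(inverse(order), pi))  # inverse gives object -> position
        return sum(dists) / len(dists)

    def cps_prob(d, pi, sigma, theta, n):
        """P(pi | theta, sigma) via eq. (7), with pi and sigma given as
        object -> position arrays (0-based)."""
        order_pi = inverse(pi)                   # object placed at each position
        prob = 1.0
        for k in range(n):
            prefix = order_pi[:k]
            num = math.exp(-theta * coset_dist(d, prefix + [order_pi[k]], n, sigma))
            den = sum(math.exp(-theta * coset_dist(d, prefix + [j], n, sigma))
                      for j in order_pi[k:])
            prob *= num / den
        return prob

    # Sanity check for n = 3: the probabilities sum to 1 over S_3.
    n = 3
    total = sum(cps_prob(d_r, list(p), [0, 1, 2], 0.5, n)
                for p in permutations(range(n)))
    print(round(total, 6))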
3.2 Computational Complexity

According to the definition of the CPS model, at the k-th stage, one needs to compute (n − k) coset-permutation distances. At first glance, the complexity of computing each coset-permutation distance is about O((n − k)!), since the coset contains this number of permutations. This is clearly intractable. The good news is that the real complexity of computing the coset-permutation distances induced by several popular permutation distances is much lower than O((n − k)!). Actually, they can be as low as O(n²), according to the following theorem.

Theorem 1. The coset-permutation distances induced from Spearman's rank correlation d_r, Spearman's footrule d_f, and Kendall's tau d_t can all be computed with a complexity of O(n²). More specifically, for k = 1, 2, . . . , n − 2, we have [Footnote 2: Note that d̃(S_{n−k}σ, π) = d(σ, π) for k = n − 1, n.]

    d̃_r(S_{n−k}σ, π) = Σ_{i=1}^{k} (π(σ⁻¹(i)) − i)² + (1 / (n − k)) Σ_{i=k+1}^{n} Σ_{j=k+1}^{n} (π(σ⁻¹(i)) − j)²,    (9)

    d̃_f(S_{n−k}σ, π) = Σ_{i=1}^{k} |π(σ⁻¹(i)) − i| + (1 / (n − k)) Σ_{i=k+1}^{n} Σ_{j=k+1}^{n} |π(σ⁻¹(i)) − j|,    (10)

    d̃_t(S_{n−k}σ, π) = (n − k)(n − k − 1) / 4 + Σ_{i=1}^{k} Σ_{j=i+1}^{n} 1{π(σ⁻¹(i)) > π(σ⁻¹(j))}.    (11)

According to the above theorem, each induced coset-permutation distance can be computed with a time complexity of O(n²). If we compute the CPS model according to Eq. (7), the time complexity will then be O(n⁴). This is clearly much more efficient than O((n − k)!). Moreover, with careful implementation, the time complexity of O(n⁴) can be further reduced to O(n²), as indicated by the following theorem.

Theorem 2. For the coset distances induced from d_r, d_f, and d_t, the CPS model in Eq. (7) can be computed with a time complexity of O(n²).

3.3 Relationship with Previous Models

The CPS model as defined above has strong connections with both the Luce model and the Mallows model, as shown below.

The similarity between the CPS model and the Luce model is that they are both defined in a stagewise manner. This stagewise definition enables efficient inference for both models. The difference between the CPS model and the Luce model lies in that the CPS model has a much richer expressiveness than the Luce model. This is mainly because the CPS model is a distance-based model while the Luce model is not. Our experiments in Section 5 show that different distances may be appropriate for different applications and datasets, which means a model with rich expressiveness has the potential to be applied to versatile applications.

The similarity between the CPS model and the Mallows model is that they are both based on distances. Actually, when the coset-permutation distance in the CPS model is induced by Kendall's tau d_t, the CPS model is even mathematically equivalent to the Mallows model defined with d_t. The major difference between the CPS model and the Mallows model lies in the computational efficiency. The CPS model can be computed efficiently with a polynomial time complexity, as discussed in the previous subsection. However, for most permutation distances, the complexity of the Mallows model is as huge as O(n!). [Footnote 3: An exception is that for Kendall's tau distance, the Mallows model can be as efficient as the CPS model because they are mathematically equivalent.]

According to the above discussion, we can see that the CPS model inherits the advantages of both the Luce model and the Mallows model, and avoids their limitations.

4 Algorithms for Rank Aggregation

In this section, we show how to apply the extended CPS model to solve the problem of rank aggregation. Here we take meta search as an example, and consider the supervised case of rank aggregation. That is, given a set of training queries, we need to learn the parameters θ in the CPS model and apply the model with the learned parameters to aggregate rankings for new test queries.

Algorithm 1 Sequential inference
Input: parameters θ, input rankings σ
Inference:
1: Initialize the set of n objects: D = {1, 2, . . . , n}.
2: π⁻¹(1) = arg min_{j ∈ D} Σ_m θ_m d̃(S_{n−1}(⟨j⟩), σ_m).
3: Remove object π⁻¹(1) from set D.
4: for k = 2 to n
   (4.1): π⁻¹(k) = arg min_{j ∈ D} Σ_m θ_m d̃(S_{n−k}(⟨π⁻¹(1), . . . , π⁻¹(k − 1), j⟩), σ_m),
   (4.2): Remove object π⁻¹(k) from set D.
5: end
Output: the final ranking π.
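A direct Python sketch of Algorithm 1 with the closed form (9) for d_r, for a single query and with 0-based indices; as written it is naive (roughly O(n⁴) per ranking rather than the O(Mn²) of Theorem 3, which requires an incremental implementation).

    def coset_dist_r(prefix, target, n):
        """Closed form (9): coset distance from the coset that fixes the
        objects in `prefix` at the top positions to `target`
        (an object -> position array)."""
        k = len(prefix)
        fixed = sum((target[o] - i) ** 2 for i, o in enumerate(prefix))
        rest = [o for o in range(n) if o not in set(prefix)]
        free = sum((target[o] - j) ** 2 for o in rest for j in range(k, n))
        return fixed + (free / (n - k) if k < n else 0.0)

    def sequential_inference(sigmas, thetas, n):
        """Algorithm 1: greedily fill positions 1..n, choosing at each stage
        the object minimizing the weighted coset distances to the M inputs."""
        remaining = set(range(n))
        order = []
        for _ in range(n):
            best = min(remaining,
                       key=lambda j: sum(t * coset_dist_r(order + [j], s, n)
                                         for t, s in zip(thetas, sigmas)))
            order.append(best)
            remaining.remove(best)
        return order  # order[k] = object ranked at position k

    sig1 = [0, 1, 2, 3]      # identity ranking (object -> position)
    sig2 = [1, 0, 2, 3]
    print(sequential_inference([sig1, sig2], [1.0, 0.5], 4))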
4.1 Learning

Let D = {(π^(l), σ^(l))} be the set of training queries, in which π^(l) is the ground-truth ranking for query q_l, and σ^(l) is the set of M input rankings. In order to learn the parameters θ in Eq. (8), we employ maximum likelihood estimation. Specifically, the log likelihood of the training data for the CPS model can be written as:

    L(θ) = log Π_l P(π^(l) | θ, σ^(l)) = Σ_l log P(π^(l) | θ, σ^(l))
         = Σ_l Σ_{k=1}^{n} [ −Σ_{m=1}^{M} θ_m d̃(S_{n−k}π^(l), σ_m^(l)) − log Σ_{j=k}^{n} exp(−Σ_{m=1}^{M} θ_m d̃(S_{n−k}(π^(l), k, j), σ_m^(l))) ].    (12)

It is not difficult to prove that L(θ) is concave with respect to θ. Therefore, we can use simple optimization techniques like gradient ascent to find the globally optimal θ.

4.2 Inference

In the test phase, given a new query and its associated M input rankings, we need to infer a final ranking with the learned parameters θ. A straightforward method is to find the permutation with the largest probability conditioned on the M input rankings, just as in the widely-used inference algorithm for the Mallows model [12]. We call this method global inference, since it finds the globally most likely permutation among all possible permutations. The problem with global inference lies in that its complexity is as high as O(n!). As a consequence, it cannot handle applications with a large number of objects to rank.

Considering the stagewise definition of the CPS model, we propose a sequential inference algorithm. The algorithm decomposes the inference into n steps. At the k-th step, we select the object j that minimizes the weighted coset-permutation distance Σ_m θ_m d̃(S_{n−k}(⟨π⁻¹(1), . . . , π⁻¹(k − 1), j⟩), σ_m), and put it at the k-th position. The procedure is listed in Algorithm 1. In fact, sequential inference is an approximation of global inference, with a much lower complexity. Theorem 3 shows that the complexity of sequential inference is just O(Mn²). Our experiments in the next section indicate that such an approximation does not hurt the ranking accuracy by much, while significantly speeding up the inference process.

Theorem 3. For the coset distances induced from d_r, d_f, and d_t, the stagewise inference shown in Algorithm 1 can be conducted with a time complexity of O(Mn²).
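For completeness, the per-query log likelihood in (12) can be evaluated as below (reusing coset_dist_r from the previous sketch); an actual learner would also need the analytic gradient, which we omit.

    import math

    def log_likelihood(pi, sigmas, thetas, n):
        """log P(pi | theta, sigma) for a single query, as in eq. (12)."""
        order = sorted(range(n), key=lambda o: pi[o])  # objects in pi's order
        ll = 0.0
        for k in range(n):
            prefix = order[:k]
            def energy(j):
                return sum(t * coset_dist_r(prefix + [j], s, n)
                           for t, s in zip(thetas, sigmas))
            ll -= energy(order[k])
            ll -= math.log(sum(math.exp(-energy(j)) for j in order[k:]))
        return ll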
For the CPS model, we tested two inference methods: global inference (denoted as CPS-G) and sequential inference (denoted as CPS-S). For comparison, we implemented the Mallows model. When applied to supervised rank aggregation, the learning process of the Mallows model is also maximum likelihood estimation. For inference, we chose the permutation with the maximal probability as the final aggregated ranking. The time complexity of both learning and inference of the Mallows model with distance dr and df is O(n!). We also implemented an approximate algorithm as suggested by [12] using MCMC sampling to speed up the learning process. We refer to this approximate algorithm as MallApp. Note that the time complexity of the inference of MallApp is still O(n!) for distance dr and df . Furthermore, as a reference, we also tested a traditional method, BordaCount [1], which is based on majority voting. We did not compare with the Luce model because it is not straightforward to be applied to supervised rank aggregation, as far as we know. Note that Mallows, MallApp and CPS-G cannot handle the large datasets MQ2007-agg and MQ2008-agg, and were only tested on the small dataset MQ2008-small. 5.2 Results First, we report the results of these algorithms on the MQ2008-small dataset. The aggregation accuracies in terms of NDCG are listed in Table 1(a). Note that the accuracy of Mallows(dt ) is the same as that of CPS-G(dt ) because of the mathematical equivalence of the two models. Therefore, we omit Mallows(dt ) in the table. We did not implement the samplingbased learning algorithm for the Mallows model with distance dt , because in this case the learning algorithm has already been efficient enough. From the table, we have the following observations. ? For the Mallows model, exact learning is a little better than the approximate learning, especially for distance df . This is in accordance with our intuition. Sampling can improve the efficiency of the algorithm, but also miss some information contained in the original permutation probability space. ? For the CPS model, the sequential inference does not lead to much accuracy drop as compared to global inference. For distances df and dr , the CPS model outperforms the Mallows model. For example, when df is used, the CPS model wins the Mallows model by about 0.04 in terms of NDCG@2, which corresponds to a relative improvement of 10%. ? For the same model, with different distance functions, the performances differ significantly. This indicates that one should select the most suitable distance for a given application. ? All the probabilistic model based methods are better than BordaCount, the heuristic method. In addition to the comparison of aggregation accuracy, we have also logged the running time of each model. For example, on our test machine (with 2.13Ghz CPU and 4GB memory), it took about 12 4 The datasets can be downloaded from http://research.microsoft.com/?letor. 
Table 1: Results.

(a) Results on MQ2008-small

NDCG           @2     @4     @6     @8
BordaCount    0.335  0.421  0.479  0.420
CPS-G(d_f)    0.392  0.471  0.518  0.446
CPS-S(d_f)    0.389  0.471  0.517  0.444
Mallows(d_f)  0.350  0.449  0.490  0.422
MallApp(d_f)  0.343  0.440  0.491  0.420
CPS-G(d_r)    0.387  0.476  0.519  0.443
CPS-S(d_r)    0.388  0.478  0.519  0.441
Mallows(d_r)  0.333  0.442  0.491  0.420
MallApp(d_r)  0.343  0.440  0.490  0.419
CPS-G(d_t)    0.414  0.485  0.530  0.451
CPS-S(d_t)    0.419  0.489  0.534  0.454

(b) Results on MQ2008-agg and MQ2007-agg

NDCG (MQ2008-agg)   @2     @4     @6     @8
BordaCount         0.281  0.343  0.389  0.372
CPS-S(d_t)         0.312  0.379  0.420  0.403
CPS-S(d_r)         0.314  0.376  0.419  0.398
CPS-S(d_f)         0.276  0.352  0.399  0.383

NDCG (MQ2007-agg)   @2     @4     @6     @8
BordaCount         0.201  0.213  0.225  0.238
CPS-S(d_t)         0.298  0.311  0.322  0.335
CPS-S(d_r)         0.332  0.341  0.352  0.362
CPS-S(d_f)         0.298  0.312  0.323  0.336

In addition to the comparison of aggregation accuracy, we have also logged the running time of each model. For example, on our test machine (with a 2.13GHz CPU and 4GB of memory), it took about 12 seconds for CPS-G(d_f) [Footnote 5: The training process of CPS-G and CPS-S is exactly the same.], 30 seconds for MallApp(d_f), and 12 hours for Mallows(d_f) to finish the training process. The inference of the Mallows model based algorithms and the global inference of the CPS model based algorithms took more time than the sequential inference of the CPS model, although the difference was not significant (this is mainly because n ≤ 8 for MQ2008-small). From these results, we can see that the proposed CPS model plus sequential inference is the most efficient one, and its accuracy is also very good as compared to the other methods.

Second, we report the results on MQ2008-agg and MQ2007-agg in Table 1(b). Note that the results of the Mallows model based algorithms and of the CPS model with global inference are not available because of the high computational complexity of their learning or inference. The results show that the CPS model with sequential inference outperforms BordaCount, no matter which distance is used. Moreover, the CPS model with d_t performs the best on MQ2008-agg, and the model with d_r performs the best on MQ2007-agg. This indicates that we can achieve good ranking performance by choosing the most suitable distance for each dataset (and so each application). This provides side evidence that it is beneficial for a probabilistic model on permutations to have rich expressiveness.

To sum up, the experimental results indicate that the CPS model based learning and sequential inference algorithms can achieve state-of-the-art ranking accuracy and are more efficient than other algorithms.

6 Conclusions and Future Work

In this paper, we have proposed a new probabilistic model, named the CPS model, on permutations for rank aggregation. The model is based on the coset-permutation distance and defined in a stagewise manner. It inherits the advantages of the Luce model (high efficiency) and the Mallows model (rich expressiveness), and avoids their limitations. We have applied the model to supervised rank aggregation and investigated how to perform learning and inference. Experiments on public datasets demonstrate the effectiveness and efficiency of the CPS model.

As future work, we plan to investigate the following issues. (1) We have shown that three induced coset-permutation distances can be computed efficiently; we will explore whether other distances also have such properties. (2) We have applied the CPS model to the supervised case of rank aggregation; we will study the unsupervised case. (3) We will investigate other applications of the model, and discuss how to select the most suitable distance for a given application.
References

[1] J. Aslam and M. Montague. Models for metasearch. In Proceedings of the 24th SIGIR, pages 276-284, 2001.
[2] J. A. Aslam and M. Montague. Models for metasearch. In SIGIR '01: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 276-284, New York, NY, USA, 2001. ACM.
[3] M. Beg. Parallel rank aggregation for the World Wide Web. World Wide Web, 6(1):5-22. Kluwer Academic Publishers, 2004.
[4] D. Critchlow. Metric Methods for Analyzing Partially Ranked Data. 1980.
[5] H. Daniels. Rank correlation and population models. Journal of the Royal Statistical Society, Series B (Methodological), pages 171-191, 1950.
[6] P. Diaconis. Group Representations in Probability and Statistics. Institute of Mathematical Statistics, Hayward, CA, 1988.
[7] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In WWW '01: Proceedings of the 10th international conference on World Wide Web, pages 613-622, New York, NY, USA, 2001. ACM.
[8] R. Fagin, R. Kumar, and D. Sivakumar. Efficient similarity search and classification via rank aggregation. In SIGMOD '03: Proceedings of the 2003 ACM SIGMOD international conference on Management of Data, pages 301-312, New York, NY, USA, 2003. ACM.
[9] M. Fligner and J. Verducci. Distance based ranking models. Journal of the Royal Statistical Society, Series B (Methodological), 48(3):359-369, 1986.
[10] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422-446, 2002.
[11] A. Klementiev, D. Roth, and K. Small. Unsupervised rank aggregation with distance-based models. In Proceedings of the 25th ICML, pages 472-479, 2008.
[12] G. Lebanon and J. Lafferty. Cranking: Combining rankings using conditional probability models on permutations. In ICML 2002, pages 363-370, 2002.
[13] T. Liu, J. Xu, T. Qin, W. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In SIGIR 2007 Workshop on Learning to Rank for Information Retrieval, pages 3-10, 2007.
[14] R. D. Luce. Individual Choice Behavior. Wiley, 1959.
[15] C. L. Mallows. Non-null ranking models. Biometrika, 44:114-130, 1957.
[16] R. Manmatha, T. Rath, and F. Feng. Modeling score distributions for combining the outputs of search engines. In SIGIR '01: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 267-275, New York, NY, USA, 2001. ACM.
[17] M. Montague and J. A. Aslam. Relevance score normalization for metasearch. In CIKM '01: Proceedings of the tenth international conference on Information and Knowledge Management, pages 427-433, New York, NY, USA, 2001. ACM.
[18] R. L. Plackett. The analysis of permutations. Applied Statistics, 24(2):193-202, 1975.
[19] L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273-286, 1927.
Efficient Optimization for Discriminative Latent Class Models

Armand Joulin* (INRIA, 23, avenue d'Italie, 75214 Paris, France; [email protected]), Francis Bach* (INRIA, 23, avenue d'Italie, 75214 Paris, France; [email protected]), Jean Ponce* (Ecole Normale Supérieure, 45, rue d'Ulm, 75005 Paris, France; [email protected]).
*WILLOW project-team, Laboratoire d'Informatique de l'Ecole Normale Supérieure (ENS/INRIA/CNRS UMR 8548).

Abstract

Dimensionality reduction is commonly used in the setting of multi-label supervised classification to control the learning capacity and to provide a meaningful representation of the data. We introduce a simple forward probabilistic model which is a multinomial extension of reduced rank regression, and show that this model provides a probabilistic interpretation of discriminative clustering methods with added benefits in terms of number of hyperparameters and optimization. While the expectation-maximization (EM) algorithm is commonly used to learn these probabilistic models, it usually leads to local maxima because it relies on a non-convex cost function. To avoid this problem, we introduce a local approximation of this cost function, which in turn leads to a quadratic non-convex optimization problem over a product of simplices. In order to maximize quadratic functions, we propose an efficient algorithm based on convex relaxations and low-rank representations of the data, capable of handling large-scale problems. Experiments on text document classification show that the new model outperforms other supervised dimensionality reduction methods, while simulations on unsupervised clustering show that our probabilistic formulation has better properties than existing discriminative clustering methods.

1 Introduction

Latent representations of data are wide-spread tools in supervised and unsupervised learning. They are used to reduce the dimensionality of the data for two main reasons: on the one hand, they provide numerically efficient representations of the data; on the other hand, they may lead to better predictive performance. In supervised learning, latent models are often used in a generative way, e.g., through mixture models on the input variables only, which may not lead to increased predictive performance. This has led to numerous works on supervised dimension reduction (e.g., [1, 2]), where the final discriminative goal of prediction is taken explicitly into account during the learning process. In this context, various probabilistic models have been proposed, such as mixtures of experts [3] or discriminative restricted Boltzmann machines [4], where a layer of hidden variables is used between the inputs and the outputs of the supervised learning model. Parameters are usually estimated by expectation-maximization (EM), a method that is computationally efficient but whose cost function may have many local maxima in high dimensions.

In this paper, we consider a simple discriminative latent class (DLC) model where inputs and outputs are independent given the latent representation. We make the following contributions:

- We provide in Section 2 a quadratic (non-convex) local approximation of the log-likelihood of our model based on the EM auxiliary function. This approximation is optimized to obtain robust initializations for the EM procedure.
- We propose in Section 3.3 a novel probabilistic interpretation of discriminative clustering with added benefits, such as fewer hyperparameters than previous approaches [5, 6, 7].
- We design in Section 4 a low-rank optimization method for non-convex quadratic problems over a product of simplices. This method relies on a convex relaxation over completely positive matrices.
- We perform experiments on text documents in Section 5, where we show that our inference technique outperforms existing supervised dimension reduction and clustering methods.

2 Probabilistic discriminative latent class models

We consider a set of N observations x_n ∈ R^p and their labels y_n ∈ {1, ..., M}, n ∈ {1, ..., N}. We assume that each observation x_n has a certain probability to be in one of K latent classes, modeled by introducing hidden variables z_n ∈ {1, ..., K}, and that these classes should be predictive of the label y_n. We model directly the conditional probability of z_n given the input data x_n and the probability of the label y_n given z_n, while making the assumption that y_n and x_n are independent given z_n (leading to the directed graphical model x_n -> z_n -> y_n). More precisely, we assume that, given x_n, z_n follows a multinomial logit model while, given z_n, y_n is a multinomial variable:

p(z_n = k | x_n) = exp(w_k^T x_n + b_k) / \sum_{j=1}^K exp(w_j^T x_n + b_j)   and   p(y_n = m | z_n = k) = \beta_{km},   (1)

with w_k ∈ R^p, b_k ∈ R and \sum_{m=1}^M \beta_{km} = 1. We use the notation w = (w_1, ..., w_K), b = (b_1, ..., b_K) and \beta = (\beta_{km})_{1≤k≤K, 1≤m≤M}. Note that the model defined by (1) can be kernelized by replacing, implicitly or explicitly, x by the image \Phi(x) of a non-linear mapping.

Related models. The simple two-layer probabilistic model defined in Eq. (1) can be interpreted and compared to other methods in various ways. First, it is an instance of a mixture of experts [3] where each expert has a constant prediction. It has thus weaker predictive power than general mixtures of experts; however, it allows efficient optimization as shown in Section 4. It would be interesting to extend our optimization techniques to the case of experts with non-constant predictions. This is what is done in [8], where a convex relaxation of EM for a similar mixture of experts is considered. However, [8] considers the maximization with respect to hidden variables rather than their marginalization, which is essential in our setting to have a well-defined probabilistic model. Note also that in [8], the authors derive a convex relaxation of the softmax regression problems, while we derive a quadratic approximation. It is worth trying to combine the two approaches in future work.

Another related model is a two-layer neural network. Indeed, if we marginalize the latent variable z, we get that the probability of y given x is a linear combination of softmax functions of linear functions of the input variables x. Thus, the only difference with a two-layer neural network with softmax functions for the last layer is the fact that our last layer considers linear parameterization in the mean parameters rather than in the natural parameters of the multinomial variable. This change allows us to provide a convexification of two-layer neural networks in Section 4. Among probabilistic models, a discriminative restricted Boltzmann machine (RBM) [4, 9] models p(y|z) as a softmax function of linear functions of z. Our model assumes instead that p(y|z) is linear in z. Again, this distinction between mean parameters and natural parameters allows us to derive a quadratic approximation of our cost function. It would of course be of interest to extend our optimization technique to the discriminative RBM.
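To make Eq. (1) concrete, here is a minimal NumPy sketch of the model's forward computation; it also returns the posterior p(z_n = k | x_n, y_n), proportional to p(z_n = k | x_n) (y_n^T \beta_k), which is exactly the E-step quantity used by the EM algorithm of Section 3.1. The code and all its names are illustrative, not the authors' implementation.

```python
import numpy as np

def dlc_forward(X, W, b, beta, Y=None):
    """Discriminative latent class model of Eq. (1).
    X: (N, p) inputs; W: (p, K) weights w_k; b: (K,) biases;
    beta: (K, M), rows are the multinomials p(y = m | z = k);
    Y: optional (N, M) one-hot labels.
    Returns p(z|x) (N, K), p(y|x) (N, M), and, if Y is given, the E-step
    responsibilities xi, with xi_nk proportional to p(z_n=k|x_n) * (y_n^T beta_k)."""
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # stabilise the softmax
    pz = np.exp(logits)
    pz /= pz.sum(axis=1, keepdims=True)           # multinomial logit p(z|x)
    py = pz @ beta                                # y indep. of x given z: sum_k p(z=k|x) beta_km
    if Y is None:
        return pz, py
    xi = pz * (Y @ beta.T)                        # unnormalised posteriors p(z|x, y)
    xi /= xi.sum(axis=1, keepdims=True)
    return pz, py, xi
```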
Finally, one may also see our model as a multinomial extension of reduced-rank regression (see, e.g., [10]), which is commonly used with Gaussian distributions and reduces to singular value decomposition in the maximum likelihood framework.

3 Inference

We consider the negative conditional log-likelihood of y_n given x_n (regularized in w to avoid overfitting), where \theta = (\beta, w, b) and y_{nm} is equal to 1 if y_n = m and 0 otherwise:

\ell(\theta) = -(1/N) \sum_{n=1}^N \sum_{m=1}^M y_{nm} log p(y_{nm} = 1 | x_n) + (\lambda / 2K) ||w||_F^2.   (2)

3.1 Expectation-maximization

A popular tool for solving maximum likelihood problems is the EM algorithm [10]. A traditional way of viewing EM is to add auxiliary variables and minimize the following upper bound of the negative log-likelihood \ell, obtained by using the concavity of the logarithm:

F(\theta, \xi) = -(1/N) \sum_{n=1}^N \sum_{m=1}^M y_{nm} \sum_{k=1}^K \xi_{nk} log[ (y_n^T \beta_k) e^{w_k^T x_n + b_k} / ( \xi_{nk} \sum_{j=1}^K e^{w_j^T x_n + b_j} ) ] + (\lambda / 2K) ||w||_F^2,

where \beta_k = (\beta_{k1}, ..., \beta_{kM})^T ∈ R^M and \xi = (\xi_1, ..., \xi_N)^T ∈ R^{N×K} with \xi_n = (\xi_{n1}, ..., \xi_{nK}) ∈ R^K. The EM algorithm can be viewed as a two-step block-coordinate descent procedure [11], where the first step (E-step) consists in finding the optimal auxiliary variables \xi, given the parameters of the model \theta. In our case, the result of this step is obtained in closed form as \xi_{nk} ∝ (y_n^T \beta_k) e^{w_k^T x_n + b_k}, with \xi_n^T 1_K = 1. The second step (M-step) consists of finding the best set of parameters \theta, given the auxiliary variables \xi. Optimizing the parameters \beta_k leads to the closed-form updates \beta_k ∝ \sum_{n=1}^N \xi_{nk} y_n with \beta_k^T 1_M = 1, while optimizing jointly over w and b leads to a softmax regression problem, which we solve with Newton's method. Since F(\theta, \xi) is not jointly convex in \theta and \xi, this procedure stops when it reaches a local minimum, and its performance strongly depends on its initialization. We propose, in the following section, a robust initialization for EM given our latent model, based on an approximation of the auxiliary cost function obtained with the M-step.

3.2 Initialization of EM

Minimizing F w.r.t. \xi leads to the original log-likelihood \ell(\theta), depending on \theta alone. Minimizing F w.r.t. \theta gives a function of \xi alone. In this section, we focus on deriving a quadratic approximation of this function, which will be minimized to obtain an initialization for EM. We consider second-order Taylor expansions around the value of \xi corresponding to uniformly distributed latent variables z_n, independent of the observations x_n, i.e., \xi_0 = (1/K) 1_N 1_K^T. This choice is motivated by the lack of a priori information on the latent classes. We briefly explain the calculation of the expansion of the terms depending on (w, b); for the rest of the calculation, see the supplementary material.

Second-order Taylor expansion of the terms depending on (w, b). Assuming uniformly distributed variables z_n and independence between z_n and x_n implies that w_k^T x_n + b_k = 0. Therefore, using the second-order expansion of the log-sum-exp function \phi(u) = log( \sum_{k=1}^K e^{u_k} ) around 0 leads to the following approximation of the terms depending on (w, b):

J_{wb}(\xi) = cst + (K/2N) tr(\xi \xi^T) - (1/2K) min_{w,b} [ (1/N) ||(K\xi - Xw - 1_N b^T) \Pi_K||_F^2 + \lambda ||w||_F^2 ] + O(||Xw + b||^3),

where \Pi_K = I - (1/K) 1_K 1_K^T is the usual centering projection matrix, and X = (x_1, ..., x_N)^T. The third-order term O(||Xw + b||_F^3) can be replaced by third-order terms in ||\xi - \xi_0||, which makes the minimization with respect to w and b correspond to a multi-label classification problem with a square loss [7, 10, 12].
Its solution may be obtained in closed form and leads to:

J_{wb}(\xi) = cst + (K/2N) tr[ \xi \xi^T (I - A(X, \lambda)) ] + O(||\xi - \xi_0||^3),

where A(X, \lambda) = \Pi_N ( I - X (N\lambda I + X^T \Pi_N X)^{-1} X^T ) \Pi_N.

Quadratic approximation. Omitting the terms that are independent of \xi, or of an order in \xi higher than two, the second-order approximation J_{app} of the function obtained from the M-step is:

J_{app}(\xi) = (K/2N) tr[ \xi \xi^T ( B(Y) - A(X, \lambda) ) ],   (3)

where B(Y) = (1/N) ( Y (Y^T Y)^{-1} Y^T - (1/N) 1_N 1_N^T ) and Y ∈ R^{N×M} is the matrix with entries y_{nm}.

Link with ridge regression. The first term, tr(\xi \xi^T B(Y)), is a concave function in \xi, whose maximum is obtained for \xi \xi^T = I (each variable in a different cluster). The second term involves A(X, \lambda), the matrix obtained in ridge regression [7, 10, 12]. Since A(X, \lambda) is a positive semi-definite matrix such that A(X, \lambda) 1_N = 0, the maximum of the second term is obtained for \xi \xi^T = 1_N 1_N^T (all variables in the same cluster). J_{app}(\xi) is thus a combination of a term trying to put every point in the same cluster and a term trying to spread them equally. Note that, in general, J_{app} is not convex.

Non-linear predictions. Using the matrix inversion lemma, A(X, \lambda) can be expressed in terms of the Gram matrix K = XX^T, which allows us to use any positive definite kernel in our framework [12] and tackle problems that are not linearly separable. Moreover, the square loss gives a natural interpretation of the regularization parameter \lambda in terms of the implicit number of parameters of the learning procedure [10]. Indeed, the degree of freedom, defined as df = n(1 - tr A), provides an intuitive method for setting the value of \lambda [7, 10].

Initialization of EM. We optimize J_{app}(\xi) to get a robust initialization for EM. Since the entries of each vector \xi_n sum to 1, we optimize J_{app} over a set of N simplices in K dimensions, S = {v ∈ R^K | v ≥ 0, v^T 1_K = 1}. However, since this function is not convex, minimizing it directly leads to local minima. We propose, in Section 4, a general reformulation of any non-convex quadratic program over a set of N simplices, and propose an efficient algorithm to optimize it.

3.3 Discriminative clustering

The goal of clustering is to find a low-dimensional representation of unlabeled observations, by assigning them to K different classes. Xu et al. [5] propose a discriminative clustering framework based on the SVM, and [7] simplifies it by replacing the hinge loss function by the square loss, leading to ridge regression. By taking M = N and the labels Y = I, we obtain a formulation similar to [7] where we are looking for a latent representation that can recover the identity matrix. However, unlike [5, 7], our discriminative clustering framework is based on a probabilistic model which may allow natural extensions. Moreover, our formulation naturally avoids putting all variables in the same cluster, whereas [5, 7] need to introduce constraints on the size of each cluster. Also, our model leads to a soft assignment of the variables, allowing flexibility in the shape of the clusters, whereas [5, 7] are based on hard assignment. Finally, since our formulation is derived from EM, we obtain a natural rounding by applying the EM algorithm after the optimization, whereas [7] uses a coarse k-means rounding. Comparisons with these algorithms can be found in Section 5.

4 Optimization of quadratic functions over simplices

To initialize the EM algorithm, we must minimize the non-convex quadratic cost function defined by Eq. (3) over a product of N simplices; a numerical sketch of the quantities in Eq. (3) is given below.
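The following sketch computes A(X, \lambda) in its primal form and B(Y), and evaluates J_{app}(\xi). It is a reading of the formulas reconstructed above, not reference code; in particular, the kernel form of A (via the matrix inversion lemma) should be substituted when p is large.

```python
import numpy as np

def quad_approx(X, Y, lam):
    """A(X, lam) and B(Y) from Eq. (3). Pi_N = I - (1/N) 1 1^T centres the data;
    with Xc = Pi_N X, one has A = Pi_N - Xc (N lam I + Xc^T Xc)^{-1} Xc^T."""
    N, p = X.shape
    Pi = np.eye(N) - np.ones((N, N)) / N
    Xc = Pi @ X
    A = Pi - Xc @ np.linalg.solve(N * lam * np.eye(p) + Xc.T @ Xc, Xc.T)
    B = (Y @ np.linalg.solve(Y.T @ Y, Y.T) - np.ones((N, N)) / N) / N
    return A, B

def J_app(xi, A, B):
    """J_app(xi) = (K / 2N) tr(xi xi^T (B - A)), with xi of shape (N, K)."""
    N, K = xi.shape
    return K / (2.0 * N) * np.sum(xi * ((B - A) @ xi))
```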
More precisely, we are interested in the following problems:

min_V f(V) = (1/2) tr(V V^T B)   s.t.   V = (V_1, ..., V_N)^T ∈ R^{N×K} and, for all n, V_n ∈ S,   (4)

where B can be any N×N symmetric matrix. Denoting by v = vec(V) ∈ R^{NK} the vector obtained by stacking all the columns of V, and defining Q = (B^T ⊗ I_K)^T, where ⊗ is the Kronecker product [13], problem (4) is equivalent to:

min_v (1/2) v^T Q v   s.t.   v ∈ R^{NK}, v ≥ 0 and (I_N ⊗ 1_K^T) v = 1_N.   (5)

Note that this formulation is general, and that Q could be any NK×NK symmetric matrix. Traditional convex relaxation methods [14] would rewrite the objective function as v^T Q v = tr(Q v v^T) = tr(Q T), where T = v v^T is a rank-one matrix which satisfies the set of constraints:

T ∈ D_{NK} = { T ∈ R^{NK×NK} | T ≥ 0, T ⪰ 0 },   (6)
for all n, m ∈ {1, ..., N}: 1_K^T T_{nm} 1_K = 1,   (7)
for all n, i, j ∈ {1, ..., N}: T_{ni} 1_K = T_{nj} 1_K.   (8)

We denote by F the set of matrices T verifying (7)-(8). With the unit-rank constraint, optimizing over v is exactly equivalent to optimizing over T. The problem is relaxed into a convex problem by removing the rank constraint, leading to a semidefinite programming problem (SDP) [15].

Relaxation. Optimizing over T instead of v is computationally inefficient, since the running-time complexity of general-purpose SDP toolboxes is in this case O((KN)^7). On the other hand, for problems without pointwise positivity, [16, 17] have considered low-rank representations of matrices T, of the form T = V V^T where V has more than one column. In particular, [17] shows that the non-convex optimization with respect to V leads to the global optimum of the convex problem in T. In order to apply the same technique here, we need to deal with the pointwise nonnegativity. This can be done by considering the set of completely positive matrices, i.e., CP_K = { T ∈ R^{NK×NK} | ∃R ∈ N*, ∃V ∈ R^{NK×R}, V ≥ 0, T = V V^T }. This set is strictly included in the set D_{NK} of doubly non-negative matrices (i.e., both pointwise nonnegative and positive semi-definite). For R ≥ 5, it turns out that the intersection of CP_K and F is the convex hull of the matrices v v^T such that v is an element of the product of simplices [16]. This implies that the convex optimization problem of minimizing tr(QT) over CP_K ∩ F is equivalent to our original problem (for which no polynomial-time algorithm is known). However, even if the set CP_K ∩ F is convex, optimizing over it is computationally inefficient [18]. We thus follow [17] and consider the problem through the low-rank pointwise nonnegative matrix V ∈ R^{NK×R}, instead of through matrices T = V V^T. Note that, following arguments from [16], if R is large enough, there are no local minima. However, because of the positivity constraint, one cannot find in polynomial time a local minimum of a differentiable function. Nevertheless, any gradient descent algorithm will converge to a stationary point. In Section 5, we compare results with R > 1 to results with R = 1, which corresponds to a gradient descent directly on the simplex.

Problem reformulation. In order to derive a local descent algorithm, we reformulate the constraints (7)-(8) in terms of V (details can be found in the supplementary material). Denoting by V_r the r-th column of V, by V_r^n the K-vector such that V_r = (V_r^1, ..., V_r^N)^T, and by V^n = (V_1^n, ..., V_R^n), condition (8) is equivalent to ||V_r^m||_1 = ||V_r^n||_1 for all n and m. Substituting this in (7) yields that, for all n, ||V^n||_{2,1} = 1, where ||V^n||_{2,1}^2 = \sum_{r=1}^R (1^T V_r^n)^2 is the corresponding squared mixed norm.
We drop this condition by using a rescaled cost function, which is equivalent. Finally, using the notation

D = { W ∈ R^{NK} | W ≥ 0, for all n, m: ||W^n||_1 = ||W^m||_1 },

we obtain a new equivalent formulation:

min_{V ∈ R^{NK×R}, ∀r: V_r ∈ D} (1/2) tr(V D^{-1} V^T Q)   with   D = Diag( (I_N ⊗ 1_K)^T V V^T (I_N ⊗ 1_K) ),   (9)

where Diag(A) is the matrix with the diagonal of A and 0 elsewhere. Since the set of constraints for V is convex, we can use a projected gradient method [19], with the projection step we now describe.

Projection on D. Given N K-vectors Z^n stacked in an NK-vector Z = [Z^1; ...; Z^N], we consider the projection of Z on D. For a given positive real number a, the projection of Z on the set of all U ∈ D such that, for all n, ||U^n||_1 = a, is equivalent to N independent projections on the l1 ball with radius a. Thus, projecting Z on D is equivalent to finding the solution of:

min_{a≥0} L(a) = \sum_{n=1}^N max_{\lambda_n ∈ R} min_{U^n ≥ 0} (1/2) ||U^n - Z^n||_2^2 + \lambda_n (1_K^T U^n - a),

where (\lambda_n)_{n≤N} are Lagrange multipliers. The problem of projecting each Z^n on the l1 ball of radius a is well studied [20], with known expressions for the optimal Lagrange multipliers (\lambda_n(a))_{n≤N} and the corresponding projection for a given a. The function L(a) is convex, piecewise-quadratic and differentiable, which yields the first-order optimality condition \sum_{n=1}^N \lambda_n(a) = 0 for a. Several algorithms can be used to find the optimal value of a. We use a binary search by looking at the sign of \sum_{n=1}^N \lambda_n(a) on the interval [0, a_max], where a_max is found iteratively. This method was found to be empirically faster than gradient descent. A sketch of this projection step is given below.

[Figure 1: Comparison between our algorithm and R independent optimizations; also comparison between two roundings, by summing and by taking the best column. Average classification rate vs. number of noise dimensions, for K = 2, 3, 5 (best seen in color).]

Overall complexity and running time. We use projected gradient descent; the bottleneck of our algorithm is the projection, with a complexity of O(R N^2 K log(K)). We present experiments on running times in the supplementary material.

5 Implementation and results

We first compare our algorithm with others to optimize the problem (4). We show that the performances are equivalent, but our algorithm can scale up to larger databases. We also consider the problem of supervised and unsupervised discriminative clustering. In both cases, we show that our algorithm outperforms existing methods.

Implementation. For supervised and unsupervised multilabel classification, we first optimize the second-order approximation J_{app}, using the reformulation (9). We use a projected gradient descent method with Armijo's rule along the projection arc for backtracking [19]. It is stopped after a maximum number of iterations (500) or if relative updates are too small (10^{-8}). When the algorithm stops, the matrix V has rank greater than 1, and we use the heuristic v* = \sum_{r=1}^R V_r ∈ S as our final solution ("avg round"). We also compare this rounding with another heuristic obtained by taking v* = argmin_{V_r} f(V_r) ("min round"). v* is then used to initialize the EM algorithm described in Section 2.

Optimization over simplices. We compare our optimization of the non-convex quadratic problem (9) in V to the convex SDP in T = V V^T on the set of constraints defined by T ∈ D_{NK}, (7) and (8).
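A minimal sketch of the projection step, following the sort-based l1/simplex projection of [20] and the binary search on a described above; the bracketing strategy and tolerances are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

def project_block(z, a):
    """Project z onto {u >= 0, 1^T u = a}; also return the optimal Lagrange
    multiplier lambda(a) of the equality constraint (sort-based, O(K log K))."""
    s = np.sort(z)[::-1]
    css = np.cumsum(s) - a
    rho = np.nonzero(s * np.arange(1, len(z) + 1) > css)[0][-1]
    lam = css[rho] / (rho + 1.0)
    return np.maximum(z - lam, 0.0), lam

def project_on_D(Z, tol=1e-10):
    """Project the N stacked K-blocks Z (an (N, K) array) onto D by a binary
    search on the common block norm a, using the first-order condition
    sum_n lambda_n(a) = 0; the multiplier sum is decreasing in a."""
    mult_sum = lambda a: sum(project_block(z, a)[1] for z in Z)
    lo, hi = 0.0, 1.0
    while mult_sum(hi) > 0:                  # grow the bracket until the sign flips
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mult_sum(mid) > 0 else (lo, mid)
    a = 0.5 * (lo + hi)
    return np.vstack([project_block(z, a)[0] for z in Z])
```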
To optimize the SDP, we use generic algorithms: CVX [21] and PPXA [22]. CVX uses interior-point methods, whereas PPXA uses proximal methods [22]. Both algorithms are computationally inefficient and do not scale well with either the number of points or the number of constraints. Thus, we set N = 10 and K = 2 on discriminative clustering problems (which are described later in this section). We compare the performances of these algorithms after rounding. For the SDP, we take v* = T 1_{NK}, and for our algorithm we report performances obtained for both roundings discussed above ("avg round" and "min round"). On these small examples, our algorithm associated with "min round" reaches performances similar to the SDP, whereas, associated with "avg round", its performance drops.

Study of rounding procedures. We compare the performances of the two different roundings, "min round" and "avg round", on discriminative clustering problems. After rounding, we apply the EM algorithm and look at the classification scores. We also compare our algorithm, for a given R, to two baselines where we solve problem (4) independently R times and then apply the same roundings ("ind - min round" and "ind - avg round"). Results are shown in Figure 1. We consider three different problems, with N = 100 and K = 2, K = 3 and K = 5, and look at the average performances as the number of noise dimensions increases in discriminative clustering problems. Our method outperforms the baseline whatever rounding we use. Figure 1 shows that on problems with a small number of latent classes (K < 5), we obtain better performances by taking the column associated with the lowest value of the cost function ("min round") than by summing all the columns ("avg round"). On the other hand, when dealing with a larger number of classes (K ≥ 5), the performance of "min round" drops significantly, while "avg round" maintains good results. A potential explanation is that summing the columns of V gives a solution close to (1/K) 1_N 1_K^T in expectation, and thus one that lies in the region where our quadratic approximation is valid. Moreover, the best column of V is usually a local minimum of the quadratic approximation, which we have found to be close to similar local minima of our original problem, therefore preventing the EM algorithm from converging to another solution. In all other experiments, we choose "avg round".

[Figure 2: Classification rate for several binary classification tasks (from top to bottom: 1 vs. 20 and 4 vs. 5) and for different values of K (from left to right: K = 3, 7, 15), as a function of the number N of training samples (best seen in color).]

Application to classification. We evaluate the optimization performance of our algorithm (DLC) on text classification tasks. For our experiments, we use the 20 Newsgroups dataset (http://people.csail.mit.edu/jrennie/), which contains postings to Usenet newsgroups. The postings are organized by content into 20 categories. We use the five binary classification tasks considered in [23, Chapter 4, page 91]. To set the regularization parameter \lambda, we use the degrees of freedom df (see Section 3.2); a sketch of this calibration follows. Each document has 13312 entries, and we take df = 1000. We use 50 random initializations for our algorithm.
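The paper sets \lambda through the degrees of freedom of the ridge problem; the sketch below uses the standard kernel ridge expression df(\lambda) = tr( K_c (K_c + n\lambda I)^{-1} ) on the centred kernel (which, for A(X, \lambda) above, equals n - 1 - tr A) as a stand-in for the stated df = n(1 - tr A), and bisects on \lambda since df is decreasing in \lambda. The exact formula and tolerances are assumptions of this sketch.

```python
import numpy as np

def lambda_for_df(K, df_target, tol=1e-3):
    """Bisect (on a log scale) for the ridge parameter lambda whose effective
    degrees of freedom on the centred kernel hit df_target."""
    n = K.shape[0]
    Pi = np.eye(n) - np.ones((n, n)) / n
    d = np.maximum(np.linalg.eigvalsh(Pi @ K @ Pi), 0.0)   # centred kernel spectrum
    df = lambda lam: np.sum(d / (d + n * lam))             # decreasing in lam
    lo, hi = 1e-12, 1e12
    while hi / lo > 1 + tol:
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if df(mid) > df_target else (lo, mid)
    return np.sqrt(lo * hi)
```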
We compare our method with classifiers such as the linear SVM and the supervised Latent Dirichlet Allocation (sLDA) classifier of Blei et al. [2]. We also compare our results to those obtained by an SVM using the features obtained with dimension-reducing methods such as LDA [1] and PCA. For these models, we select parameters with 5-fold cross-validation. We also compare to EM without our initialization ("rand-init"), again with 50 random initializations: this is a local descent method, close to back-propagation in a two-layer neural network, which in this case strongly suffers from local minima problems. An interesting result on computational time is that EM without our initialization needs more steps to obtain a local minimum; it is therefore slower than with our initialization in this particular set of experiments. We show some results in Figure 2 (others may be found in the supplementary material) for different values of K and with an increasing number N of training samples. In the case of topic models, K represents the number of topics. Our method significantly outperforms all the other classifiers. The comparison with "rand-init" shows the importance of our convex initialization. We also note that our performance increases slowly with K. Indeed, the number of latent classes needed to correctly separate two classes of text is small. Moreover, the algorithm tends to automatically select K: empirically, we notice that, starting with K = 15 classes, our average final number of active classes is around 3. This explains the relatively small gain in performance as K increases.

[Figure 3: Clustering error when increasing the number of noise dimensions, over 50 different problems with 10 random initializations each. Left: K = 2, N = 100 and R = 5; right: K = 5, N = 250 and R = 10 (best seen in color).]

[Figure 4: Comparison between our method (left) and k-means (right). First, circles with RBF kernels; second, linearly separable bumps. K = 2, N = 200 and R = 5 in both cases.]

Application to discriminative clustering. Figure 3 shows the optimization performance of the EM algorithm with 10 random starting points, with ("DLC") and without ("rand-init") our initialization method. We compare their performances to k-means, a Gaussian mixture model ("GMM"), Diffrac [7] and max-margin clustering ("MMC") [24]. Following [7], we take linearly separable bumps in a two-dimensional space and add dimensions containing random independent Gaussian noise ("noise dimensions") to the data. We evaluate the ratio of misclassified observations over the total number of observations. For the first experiment, we fix K = 2, N = 100 and R = 5, and for the second, K = 5, N = 250 and R = 10. The additional independent noise dimensions are normally distributed. We use linear kernels for all the methods. We set the regularization parameter \lambda to 10^{-2} for all experiments, but we have seen that results do not change much as long as \lambda is not too small (> 10^{-8}). Note that we do not show results for the MMC algorithm when K = 5, since this algorithm is specifically designed for problems with K = 2; it would be interesting to compare to the extension for multi-class problems proposed by Zhang et al. [24]. On both examples, we are significantly better than Diffrac, k-means and MMC. We show in Figure 4 additional examples which are non-linearly separable. A self-contained sketch of the overall optimize-then-round pipeline is given below.
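The sketch below ties the pieces together: projected gradient on the low-rank variable V, followed by the two roundings of the implementation paragraph. For simplicity it keeps every K-block of every column on the probability simplex (fixed block norm a = 1) rather than using the rescaled formulation (9) with its binary search on a; this simplification, and all parameter values, are assumptions of the sketch.

```python
import numpy as np

def project_simplex(z):
    """Euclidean projection onto the probability simplex (sort-based)."""
    s = np.sort(z)[::-1]
    css = np.cumsum(s) - 1.0
    rho = np.nonzero(s * np.arange(1, len(z) + 1) > css)[0][-1]
    return np.maximum(z - css[rho] / (rho + 1.0), 0.0)

def optimize_and_round(Q, N, K, R=5, steps=500, lr=1e-2, seed=0):
    """Projected gradient on f(V) = 1/2 sum_r V_r^T Q V_r, each K-block of
    every column constrained to the simplex, then 'avg round' / 'min round'."""
    rng = np.random.default_rng(seed)
    V = rng.random((N * K, R))
    blocks = [slice(n * K, (n + 1) * K) for n in range(N)]
    for step in range(steps + 1):
        if step > 0:
            V -= lr * (Q @ V)                     # gradient of f is Q V
        for n in range(N):                        # re-project every block
            for r in range(R):
                V[blocks[n], r] = project_simplex(V[blocks[n], r])
    obj = 0.5 * np.einsum('ir,ij,jr->r', V, Q, V)  # f on each column
    v_avg = V.sum(axis=1) / R                      # 'avg round', kept on the simplices
    v_min = V[:, int(np.argmin(obj))]              # 'min round'
    return v_avg, v_min
```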
6 Conclusion We have presented a probabilistic model for supervised dimension reduction, together with associated optimization tools to improve upon EM. Application to text classification has shown that our model outperforms related ones and we have extended it to unsupervised situations, thus drawing new links between probabilistic models and discriminative clustering. The techniques presented in this paper could be extended in different directions: First, in terms of optimization, while the embedding of the problem to higher dimensions has empirically led to finding better local minima, sharp statements might be made to characterize the robustness of our approach. In terms of probabilistic models, such techniques should generalize to other latent variable models. Finally, some additional structure could be added to the problem to take into account more specific problems, such as multiple instance learning [25], multi-label learning or discriminative clustering for computer vision [26, 27]. Acknowledgments. This paper was partially supported by the Agence Nationale de la Recherche (MGA Project) and the European Research Council (SIERRA Project). We would like to thank Toby Dylan Hocking, for his help on the comparison with other methods for the classification task. 8 References [1] David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3, 2003. [2] David M. Blei and Jon D. Mcauliffe. Supervised topic models. In Advances in Neural Information Processing Systems (NIPS), 2007. [3] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79?87, 1991. [4] H. Larochelle and Y. Bengio. Classification using discriminative restricted boltzmann machines. In Proceedings of the international conference on Machine learning (ICML), 2008. [5] Linli Xu, James Neufeld, Bryce Larson, and Dale Schuurmans. Maximum margin clustering. In Advances in Neural Information Processing Systems (NIPS), 2004. [6] Linli Xu. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, 2005. [7] F. Bach and Z. Harchaoui. Diffrac : a discriminative and flexible framework for clustering. In Advances in Neural Information Processing Systems (NIPS), 2007. [8] N. Quadrianto, T. Caetano, J. Lim, and D. Schuurmans. Convex relaxation of mixture regression with efficient algorithms. In Advances in Neural Information Processing Systems (NIPS), 2009. [9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504, 2006. [10] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001. [11] David R Hunter and Kenneth Lange. A tutorial on MM algorithms. The American Statistician, 58(1):30? 37, February 2004. [12] J Shawe-Taylor and N Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ Press, 2004. [13] Gene H. Golub and Charles F. Van Loan. Matrix computations. Johns Hopkins University Press, 3rd edition, October 1996. [14] Kurt Anstreicher and Samuel Burer. D.C. versus copositive bounds for standard QP. Journal of Global Optimization, 33(2):299?312, October 2005. [15] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004. [16] Samuel Burer. Optimizing a polyhedral-semidefinite relaxation of completely positive programs. Mathematical Programming Computation, 2(1):1?19, March 2010. [17] M. Journ?ee, F. Bach, P.-A. Absil, and R. Sepulchre. 
Low-rank optimization for semidefinite convex problems. volume 20, pages 2327?2351. SIAM Journal on Optimization, 2010. [18] A. Berman and N. Shaked-Monderer. Completely Positive Matrices. World Scientific Publishing Company, 2003. [19] D. Bertsekas. Nonlinear programming. Athena Scientific, 1995. [20] P. Brucker. An O(n) algorithm for quadratic knapsack problems. In Journal of Optimization Theory and Applications, volume 134, pages 549?554, 1984. [21] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, August 2010. [22] Patrick L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 53:475?504, 2004. [23] Simon Lacoste-Julien. Discriminative Machine Learning with Structure. PhD thesis, University of California, Berkeley, 2009. [24] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Maximum margin clustering made practical. In Proceedings of the international conference on Machine learning (ICML), 2007. [25] Thomas G. Dietterich and Richard H. Lathrop. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31?71, 1997. [26] P. Felzenszwalb, D. Mcallester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2008. [27] A. Joulin, F. Bach, and J. Ponce. Discriminative clustering for image co-segmentation. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2010. 9
Permutation Complexity Bound on Out-Sample Error

Malik Magdon-Ismail
Computer Science Department, Rensselaer Polytechnic Institute
110 8th Street, Troy, NY 12180, USA
[email protected]

Abstract

We define a data-dependent permutation complexity for a hypothesis set H, which is similar to a Rademacher complexity or maximum discrepancy. The permutation complexity is based (like the maximum discrepancy) on dependent sampling. We prove a uniform bound on the generalization error, as well as a concentration result which means that the permutation estimate can be efficiently estimated.

1 Introduction

Assume a standard setting with data D = {(x_i, y_i)}_{i=1}^n, where (x_i, y_i) are sampled iid from the joint distribution p(x, y) on R^d × {±1}. Let H = {h : R^d -> {±1}} be a learning model which produces a hypothesis g ∈ H when given D (we use g for the hypothesis returned by the learning algorithm and h for a generic hypothesis in H). We assume the 0-1 loss, so the in-sample error is e_in(h) = (1/2n) \sum_{i=1}^n (1 - y_i h(x_i)). The out-sample error is e_out(h) = (1/2) E[(1 - y h(x))], where the expectation is over the joint distribution p(x, y). We wish to bound e_out(g). To do so, we will bound |e_out(h) - e_in(h)| uniformly over H for all distributions p(x, y); however, the bound itself will depend on the data, and hence the distribution. The classic distribution-independent bound is the VC bound (Vapnik and Chervonenkis, 1971); the hope is that, by taking into account the data, one can get a tighter bound. The data-dependent permutation complexity [1] for H is defined by:

P_H(n, D) = E_pi [ max_{h ∈ H} (1/n) \sum_{i=1}^n y_{pi_i} h(x_i) ].

[Footnote 1: For simplicity, we assume that H is closed under negation; generally, all the results hold with the complexities defined using absolute values, so for example P_H(n, D) = E_pi [ max_{h ∈ H} | (1/n) \sum_{i=1}^n y_{pi_i} h(x_i) | ].]

Here, pi is a uniformly random permutation on {1, ..., n}. P_H(n, D) is an intuitively plausible measure of the complexity of a model, measuring its ability to correlate with a random permutation of the target values. The difficulty in analyzing P_H is that {y_{pi_i}} is an ordered random sample from y = [y_1, ..., y_n], sampled without replacement; as such, it is a dependent sampling from a data-driven distribution. Analogously, we may define the bootstrap complexity, using the bootstrap distribution B on y, where each sample y_i^B is independent and uniformly random over y_1, ..., y_n:

B_H(n, D) = E_B [ max_{h ∈ H} (1/n) \sum_{i=1}^n y_i^B h(x_i) ].

When the average y-value ybar = 0, the bootstrap complexity is exactly the Rademacher complexity (Bartlett and Mendelson, 2002; Fromont, 2007; Kääriäinen and Elomaa, 2003; Koltchinskii, 2001; Koltchinskii and Panchenko, 2000; Lozano, 2000; Lugosi and Nobel, 1999; Massart, 2000):

R_H(n, D) = E_r [ max_{h ∈ H} (1/n) \sum_{i=1}^n r_i h(x_i) ],

where r is a random vector of i.i.d. fair ±1's. The maximum discrepancy complexity measure Delta_H(n, D) is similar to the Rademacher complexity, with the expectation over r being restricted to those r satisfying \sum_{i=1}^n r_i = 0:

Delta_H(n, D) = E_r [ max_{h ∈ H} (1/n) \sum_{i=1}^n r_i y_i h(x_i) ].

When ybar = 0, the permutation complexity is the maximum discrepancy; the permutation complexity is to maximum discrepancy as the bootstrap complexity is to the Rademacher complexity. The permutation complexity maintains a little more information regarding the distribution. Indeed, we prove a uniform bound very similar to the uniform bound obtained using the Rademacher complexity (Theorem 1 below); first, a sketch of estimating these complexities on data.
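All four complexities can be estimated on a dataset by Monte Carlo, replacing the max over H by empirical risk minimization over a concrete hypothesis class. Below is a minimal NumPy sketch for the class of axis-aligned decision stumps (a class closed under negation, matching footnote 1); the choice of stumps and all names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def stump_correlation(X, s):
    """max over all threshold stumps h (and their negations) of
    (1/n) sum_i s_i h(x_i), computed exactly via prefix sums per feature."""
    n, d = X.shape
    best = abs(s.mean())                    # constant hypotheses h = +1 / -1
    for j in range(d):
        c = np.cumsum(s[np.argsort(X[:, j])])   # prefix sums in feature order
        # threshold after position k: sum_i s_i h(x_i) = total - 2 * prefix_k
        best = max(best, np.max(np.abs(2 * c - c[-1])) / n)
    return best

def complexities(X, y, n_mc=50, rng=np.random.default_rng(0)):
    """Monte Carlo estimates of P_H, B_H and R_H for the stump class."""
    n = len(y)
    P = np.mean([stump_correlation(X, y[rng.permutation(n)]) for _ in range(n_mc)])
    B = np.mean([stump_correlation(X, rng.choice(y, size=n)) for _ in range(n_mc)])
    R = np.mean([stump_correlation(X, rng.choice([-1, 1], size=n)) for _ in range(n_mc)])
    return P, B, R
```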
Theorem 1. With probability at least 1 - delta, for every h ∈ H,

e_out(h) <= e_in(h) + P_H(n, D) + 13 sqrt( (1/2n) ln(6/delta) ).

The probability in this theorem is with respect to the data distribution. The challenge in proving this theorem is to accommodate samples (y_{pi_i}) constructed according to the data, and in a dependent way. Using our same proof technique, one can also obtain a similar uniform bound with the bootstrap complexity, where the samples are independent, but according to the data. The proof starts with the standard ghost sample and symmetrization argument. We then need to handle the data-dependent sampling in the complexity measure, and this is done by introducing a second ghost data set to govern the sampling. The crucial aspect about sampling according to a second ghost data set is that the samples are now independent of the data; this is acceptable, provided the two methods of sampling are close enough, and this is what constitutes the meat of the proof given in Section 2.2.

For a given permutation pi, one can compute max_{h ∈ H} (1/n) \sum_{i=1}^n y_{pi_i} h(x_i) using an empirical risk minimization; however, the computation of the expectation over permutations is an exponential task, which needless to say is not feasible. Fortunately, we can establish that the permutation complexity is concentrated around its expectation, which means that in principle a single permutation suffices to compute the permutation complexity. Let pi be a single random permutation.

Theorem 2. For an absolute constant c <= 6 + sqrt(2 / ln 2), with probability at least 1 - delta,

P_H(n, D) <= sup_{h ∈ H} (1/n) \sum_{i=1}^n y_{pi_i} h(x_i) + c sqrt( (1/2n) ln(3/delta) ).

The probability here is with respect to random permutations (i.e., it holds for any data set). It is easy to show concentration for the bootstrap complexity about its expectation — this follows from McDiarmid's inequality because the samples are independent. The complication with the permutation complexity is that the samples are not independent. Nevertheless, we can show the concentration indirectly, by first relating the two complexities for any data set, and then using the concentration of the bootstrap complexity (see Section 2.3).

Empirical Results. For a single random permutation, with probability at least 1 - delta,

e_out(h) <= e_in(h) + sup_{h ∈ H} (1/n) \sum_{i=1}^n y_{pi_i} h(x_i) + O( sqrt( (1/n) ln(1/delta) ) ).

Asymptotically, one random permutation suffices; in practice, one should average over a few. Indeed, a permutation-based validation estimate for model selection has been extensively tested (see Magdon-Ismail and Mertsalov (2010) for details); for classification, this permutation estimate is the permutation complexity after removing a bias term. It outperformed LOO cross-validation and the Rademacher complexity on real data. We restate those results here, comparing model selection using the permutation estimate versus using the Rademacher complexity (using real data sets from the UCI Machine Learning repository (Asuncion and Newman, 2007)). The performance metric is the regret when compared to oracle model selection on a held-out set (lower regret is better). We considered two model selection tasks: choosing the number of leaves in a decision tree, and selecting k in the k-nearest neighbor method. The results reported here are averaged over several (10,000 or more) random splits of the data into a training set and a held-out set. We define a learning episode as an empirical risk minimization on O(n) data points.

                         |    10 Learning Episodes      |   100 Learning Episodes
                         | Dec. Trees  |     k-NN       | Dec. Trees  |     k-NN
Data           n         | Perm.  Rad. | Perm.   Rad.   | Perm.  Rad. | Perm.   Rad.
Abalone        3,132     | 0.02   0.02 | 0.09    0.12   | 0.02   0.02 | 0.04    0.04
Ionosphere     263       | 0.18   0.19 | 0.75    0.84   | 0.16   0.17 | 0.70    0.83
M.Mass         667       | 0.06   0.06 | 0.11    0.12   | 0.05   0.05 | 0.11    0.11
Parkinsons     144       | 0.34   0.40 | 0.32    0.44   | 0.34   0.41 | 0.33    0.43
Pima Ind.      576       | 0.07   0.07 | 0.12    0.15   | 0.07   0.07 | 0.11    0.14
Spambase       3,450     | 0.07   0.07 | 0.43    0.54   | 0.06   0.07 | 0.43    0.55
Transfusion    561       | 0.08   0.09 | 0.12    0.19   | 0.08   0.09 | 0.12    0.19
WDBC           426       | 0.24   0.37 | 0.33    0.50   | 0.23   0.34 | 0.34    0.51
Diffusion      2,665     | 0.03   0.02 | 0.06    0.04   | 0.03   0.02 | 0.06    0.03

The permutation complexity appears to dominate most of the time (especially when n is small); and, when it fails to dominate, it is as good or only slightly worse than the Rademacher estimate. It is not surprising that, as n increases, the performances of the various complexities converge. Asymptotically, one can deduce several relationships between them; for example, the maximum discrepancy can be asymptotically bounded from above and below by the Rademacher complexity. Similarly (see Lemma 5), the bootstrap and permutation complexities are equal, asymptotically. The small-sample performance of the complexities as bounding tools is not easy to discern theoretically, which is where the empirics comes in. An intuition for why the permutation complexity performs relatively well is that it maintains more of the true data distribution. Indeed, the permutation method for validation was found to work well empirically, even in regression (Magdon-Ismail and Mertsalov, 2010); however, our permutation complexity bound only applies to classification.

Open Questions. Can the permutation complexity bound be extended beyond classification to (for example) regression with bounded loss? The permutation complexity displays a bias for severely unbalanced data; can this bias be removed? We conjecture that it should be possible to get a better uniform bound in terms of E_pi[ max_{h ∈ H} (1/n) \sum_{i=1}^n (y_{pi_i} - ybar) h(x_i) ].

1.1 Related Work

Out-sample error estimation has extensive coverage, both in the statistics and learning communities. (i) Statistical methods try to estimate the out-sample error asymptotically in n, and give consistent estimates under certain model assumptions, for example: final prediction error (FPE) (Akaike, 1974); generalized cross-validation (GCV) (Craven and Wahba, 1979); or covariance-type penalties (Efron, 2004; Wang and Shen, 2006). Statistical methods tend to work well when the model has been well specified. Such methods are not our primary focus. (ii) Sampling methods, such as leave-one-out cross-validation (LOO-CV), try to estimate the out-sample error directly. Cross-validation is perhaps the most used validation method, dating as far back as 1931 (Larson, 1931; Wherry, 1931, 1951; Katzell, 1951; Cureton, 1951; Mosier, 1951; Stone, 1974). The permutation complexity uses a "sampled" data set on which to compute the complexity; other than this superficial similarity, the estimates are inherently different. (iii) Bounds. The most celebrated uniform bound on generalization error is the distribution-independent bound of Vapnik-Chervonenkis (VC bound) (Vapnik and Chervonenkis, 1971). Since the VC dimension may be hard to compute, empirical estimates have been suggested (Vapnik et al., 1994). The VC bound is optimal among distribution-independent bounds; however, for a particular distribution, it could be sub-optimal. Several data-dependent bounds have already been mentioned, which can typically be estimated in-sample via optimization: maximum discrepancy (Bartlett et al., 2002); Rademacher-style penalties (Bartlett and Mendelson, 2002; Fromont, 2007; Kää
ri?ainen and Elomaa, 2003; Koltchinskii, 2001; Koltchinskii and Panchenko, 2000; Lozano, 2000; Lugosi and Nobel, 1999; Massart, 2000); margin based bounds, for example (Shawe-Taylor et al., 1998). Generalizations to Gaussian and symmetric, bounded variance r have also been suggested, (Bartlett and Mendelson, 2002; Fromont, 2007) . One main application of such bounds is that any such approximate estimate of the out-sample error (which satisfies some bound of the form of the permutation complexity bound) can be used for model selection, after adding a (small) penalty for the ?complex3 ity of model selection? (see Bartlett et al. (2002)). In practice, this penalty for the complexity of model selection is ignored (as in Bartlett et al. (2002)). (iv) Permutation Methods are not new to statistics (Good, 2005; Golland et al., 2005; Wiklund et al., 2007). Golland et al. (2005) show concentration for a permutation based test of significance for the improved performance of a more complex model, using the Rademacher complexity. We directly give a uniform bound for the out-sample error in terms of a permutation complexity, answering a question posed in (Golland et al., 2005) which asks whether there is a direct link between permutation statistics and generalization errors. Indeed, Magdon-Ismail and Mertsalov (2010) construct a permutation estimate for validation which they empirically test in both classification and regression problems. For classification, their estimate is related to the permutation complexity. Most relevant to this work are Rademacher penalties and the corresponding (sampling without replacement) maximum discrepancy. Bartlett et al. (2002) give a uniform bound using the maximum discrepancy which is in some sense a uniform bound based on a sampling without replacement (dependent sampling); however, the sampling distribution is fixed, independent of the data. It is illustrative to briefly sketch the derivation of the maximum discrepancy  bound. Adapting the proof in Bartlett et al. (2002) and ignoring terms which are O ( n1 ln ?1 )1/2 , with probability at least 1??: eout (h) ? (b) = (c) ? (d) ? (a) ein (h) + sup {eout (h) ? ein (h)} ? ein (h) + ED sup {eout (h) ? ein (h)} , h?H h?H ) ( n X 1 ? ? yi h(xi ) ? yi h(xi ) , ein (h) + ED sup ED? 2n i=1 h?H ) ( n 1 X ? ? yi h(xi ) ? yi h(xi ) , ein (h) + ED,D? max h?H 2n i=1 ? ? n/2 ? ?1 X yi h(xi ) ? yi? h(x?i ) , ein (h) + ED,D? max ? h?H ? n i=1 (e) = ein (h) + ED ?H (n, D) ? ein (h) + ?H (n, D), (a) follows from McDiarmid?s inequality because eout (h) ? ein (h) is stable to a single point perturbation for every h, hence the supremum is also stable; in (b) appears a ghost data set and (c) follows by convexity of the supremum; in (d), we break the sum into two equal parts, which adds the factor of two; finally, (e) follows again by McDiarmid?s inequality because ?H is stable to single point perturbations. The discrepancy automatically drops out from using the ghost sample; this does not happen with data dependent permutation sampling, which is where the difficulty lies. 2 Permutation Complexity Uniform Bound We now give the proof of Theorem 1. We will adapt the standard ghost sample approach in VC-type proofs and the symmetrization trick in (Gin?e and Zinn, 1984) which has greatly simplified VC-style proofs. In general, high probability results are with respect to the distribution over data sets. Our main bounding tool will be McDiarmid?s inequality: Q Lemma 1 (McDiarmid (1989)) Let Xi ? Ai be independent; suppose f : Ai 7? 
R satisfies i sup Q (x1 ,...,xn )? z?Aj i Ai |f (x) ? f (x1 , . . . , xj?1 , z, xj+1 , . . . , xn )| ? cj , for j = 1, . . . , n. Then, with probability at least 1 ? ?, v u n u1 X 1 f (X1 , . . . , Xn ) ? Ef (X1 , . . . , Xn ) + t c2 ln . 2 i=1 i ? We also obtain Ef ? f + q P n 1 2 2 i=1 ci ln ?1 by using ?f in McDiarmid?s inequality. 4 2.1 Permutation Complexity The out-sample permutation complexity of a model is: " PH (n) = ED PH (n, D) = ED,? # n 1X max y?i h(xi ) , h?H n i=1 where the expectation is over the data D = (x1 , y1 ), . . . , (xn , yn ) and a random permutation ?. Let D? differ from D only in one example, (xj , yj ) ? (x?j , yj? ). Lemma 2 |PH (n, D) ? PH (n, D? )| ? 4 n. P Proof: For any permutation ? and every h ? H, the sum ni=1 y?i h(xi ) changes by at most 4 in ? going from D to D ; thus, the maximum over h ? H changes by at most 4. Lemma 2 together with McDiarmid?s inequality implies a concentration of PH (n, D) about PH (n), which means we can work with PH (n, D) instead of the unknown PH (n). r 1 1 Corollary 1 With probability at least 1 ? ?, PH (n) ? PH (n, D) + 4 ln . 2n ? Pn Since ein (h) = 21 (1 ? n1 i=1 yi h(xi )), the empirical risk minimizer g ? on the permuted targets ? y can be used to compute PH (n, D) for a particular permutation ?. 2.2 Bounding the Out-Sample Error To bound suph?H {eout (h) ? ein (h)}, we first use the standard ghost sample and symmetrization arguments typical of modern generalization error proofs (see for example Bartlett and Mendelson (2002); Shawe-Taylor and Cristianini (2004)). Let r?? = [r1?? , . . . , rn?? ] be a ?1 sequence. Lemma 3 With probability at least 1 ? ?: " sup {eout (h) ? ein (h)} h?H ? ED,D? sup h?H ( )# r n 1 1 X ?? 1 ? ? ri (yi h(xi ) ? yi h(xi )) + ln . 2n i=1 2n ? Proof: We proceed as in the proof of the maximum discrepancy bound in Section 1.1: " ( )# r n (a) 1 X 1 1 ? ? yi h(xi ) ? yi h(xi ) + ln , sup {eout (h) ? ein (h)} ? ED,D? sup 2n 2n ? h?H h?H i=1 )# " ( r n 1 1 X ?? 1 (b) ri (yi h(xi ) ? yi? h(x?i )) + = ED,D? sup ln . 2n i=1 2n ? h?H In (a), the O(( n1 ln 1? )1/2 ) term is from applying McDiarmid?s inequality because ein (h) changes by at most n1 if one data point changes, and so the supremum changes by at most that much; (b) follows because ri?? = ?1 corresponds to exchanging xi , x?i in the expectation which does not change the expectation (it amounts to relabeling of random variables). Lemma 3 holds for an arbitrary sequence r?? which is independent of D, D? ; we can take the expectation with respect to r?? , for arbitrarily distributed r?? , as long as r?? is independent of D, D? . 2.2.1 Generating Permutations with ?1 Sequences Fix y; for a given permutation ?, define a corresponding ?1 sequence r? by ri? = y?i yi ; then, y?i = ri? yi . Thus, given y, for each of the n! permutations ?1 , . . . , ?n! , we have a corresponding ?1 sequence r?i ; we thus obtain a multiset of sequences Sy = {r?1 , . . . , r?n! } (there may be repetitions as two different permutations may result in the same sequence of ?1 values); we thus have a mapping from permutations to the ?1 sequences in Sy . If r, a random vector of ?1s, is 5 uniform on Sy , then r.y (componentwise product) is uniform over the permutations of y. We say that Sy generates the permutations on y. Similarly, we can define Sy? , the generator of permutations on y? . Unfortunately, Sy , Sy? depend on D, D? , and so we can?t take the expectation uniformly over (for example) r ? Sy . We can overcome this by introducing a second ghost sample D?? 
Theorem 3 With probability at least 1 − 5δ,

  sup_{h∈H} {e_out(h) − e_in(h)} ≤ P_H(n) + 9 √( (1/2n) ln(1/δ) ).

We obtain Theorem 1 by combining Theorem 3 with Corollary 1.

2.2.2 Proof of Theorem 3

Let D'' be a second, independent ghost sample, and S_{y''} the generator of permutations for y''. In Lemma 3, take the expectation over r'' uniform on S_{y''}. The first term on the RHS becomes

  E_{D,D',D''} [ (1/n!) Σ_π sup_{h∈H} (1/2n) Σ_{i=1}^{n} r_i''(π) (y_i h(x_i) − y_i' h(x_i')) ],   (1)

where each permutation π induces a particular sequence r''(π) ∈ S_{y''} (previously we used r_i^π, which is now r_i(π)). Consider the sequences r, r' corresponding to the permutations on y and y'. The next lemma will ultimately relate the expectation over permutations in the second ghost data set to the permutations over D, D'.

Lemma 4 With probability at least 1 − 2δ, there is a one-to-one mapping from the sequences in S_{y''} = {r''(π)}_π to S_y = {r(π)}_π such that

  (1/2n) | Σ_{i=1}^{n} (r_i'' − r_i(r'')) y_i h(x_i) | ≤ √( (8/n) ln(1/δ) ),

for every r'' ∈ S_{y''} and every h ∈ H (we write r(r'') to denote the sequence r ∈ S_y to which r'' is mapped). Similarly, there exists such a mapping from S_{y''} to S_{y'}. The probability here is with respect to y, y' and y''.

This lemma says that the permutation generating sets S_{y''}, S_{y'}, and S_y are essentially equivalent.

Proof: We can (without loss of generality) reorder the points in D'' so that the first k'' are +1, so y_1'' = ... = y_{k''}'' = +1, and the remaining are −1. Similarly, we can order the points in D so that the first k are +1, so y_1 = ... = y_k = +1. We now construct the mapping from S_{y''} to S_y as follows. For a given permutation π, we map r''(π) ∈ S_{y''} to r(π) ∈ S_y. This mapping is clearly bijective, since every permutation corresponds uniquely to a sequence in S_y (and in S_{y''}).

Let r_i = y_{π(i)} y_i and r_i'' = y_{π(i)}'' y_i''. If r_i ≠ r_i'', then either y_{π(i)} ≠ y_{π(i)}'' or y_i ≠ y_i''. Since y and y'' disagree on exactly |k − k''| locations (and similarly for y' and y''), the number of locations where r and r'' disagree is therefore at most 2|k − k''|. Thus, for any r'' and any h ∈ H,

  (1/2n) | Σ_{i=1}^{n} (r_i'' − r_i(r'')) y_i h(x_i) | ≤ (1/2n) Σ_{i=1}^{n} |r_i'' − r_i(r'')| |y_i h(x_i)| = (1/2n) Σ_{i=1}^{n} |r_i'' − r_i(r'')| ≤ 2|k − k''| / n.

We observe that Σ_{i=1}^{n} (y_i − y_i'') = 2(k − k''), and so

  (1/2n) | Σ_{i=1}^{n} (r_i'' − r_i(r'')) y_i h(x_i) | ≤ | (1/n) Σ_{i=1}^{n} (y_i − y_i'') | = | (1/n) Σ_{i=1}^{n} z_i |,

where z_i = y_i − y_i''. Since y and y'' are identically distributed, the z_i are independent and zero mean. We consider the function f(z_1, ..., z_n) = (1/n) Σ_{i=1}^{n} z_i. Since z_i ∈ {0, ±2}, if you change one of the z_i, f changes by at most 4/n, and so the conditions hold to apply McDiarmid's inequality to f. Thus, using the symmetry of the z_i, with probability at least 1 − 2δ, |(1/n) Σ_{i=1}^{n} z_i| ≤ √( (8/n) ln(1/δ) ).

Given D, D', D'', assume the mappings which are known to exist by the previous lemma are r(r'') and r'(r''). We can rewrite the internal summand in the expression of Equation (1) using the equality

  r_i'' (y_i h(x_i) − y_i' h(x_i')) = (r_i'' − r_i(r'') + r_i(r'')) y_i h(x_i) − (r_i'' − r_i'(r'') + r_i'(r'')) y_i' h(x_i').

Using Lemma 4, we can, with probability at least 1 − 2δ, bound the term which involves (r_i'' − r_i(r'')) in Equation (1); and, similarly, with probability at least 1 − 2δ, we bound the term involving (r_i'' − r_i'(r'')).
Thus, with probability at least 1 − 4δ, the expression in Equation (1) is bounded by:

  E_{D,D',D''} [ (1/n!) Σ_π sup_{h∈H} (1/2n) Σ_{i=1}^{n} ( r_i(r'') y_i h(x_i) − r_i'(r'') y_i' h(x_i') ) ] + 2 √( (8/n) ln(1/δ) ),

where r''(π) cycles through the sequences in S_{y''}. Since the mappings r(r'') and r'(r'') are one-to-one, r(r'').y cycles through the permutations of y, and similarly r'(r'').y' through those of y'. Since H is closed under negation, we finally obtain the bound

  E_D [ (1/n!) Σ_π sup_{h∈H} (1/2n) Σ_{i=1}^{n} y_{π(i)} h(x_i) ] + E_{D'} [ (1/n!) Σ_π sup_{h∈H} (1/2n) Σ_{i=1}^{n} y_{π(i)}' h(x_i') ] + 2 √( (8/n) ln(1/δ) ).

Using this in Lemma 3, with probability at least 1 − 5δ,

  sup_{h∈H} {e_out(h) − e_in(h)} ≤ P_H(n) + 9 √( (1/2n) ln(1/δ) ).

Commentary. (i) The permutation complexity bound needs empirical risk minimization, which is notoriously hard; however, if the same algorithm is used for learning as well as for computing P, we can view it as optimization over a constrained hypothesis set (this is especially so with regularization); the bounds then hold. (ii) The same proof technique can be used to get a bootstrap complexity bound; the result is similar. (iii) One could bound P_H for VC function classes, showing that this data-dependent bound is asymptotically no worse than a VC-type bound. Bounding the permutation complexity on specific domains could follow the methods in Bartlett and Mendelson (2002).

2.3 Estimating P_H(n, D) Using a Single Permutation

We now prove Theorem 2, which states that one can essentially estimate P_H(n, D) (an average over all permutations) by sup_{h∈H} (1/n) Σ_{i=1}^{n} y_{π(i)} h(x_i), using just a single randomly selected permutation π. Our proof is indirect: we will link P_H to the bootstrap complexity B_H. The bootstrap complexity is concentrated via an easy application of McDiarmid's inequality, which will ultimately allow us to conclude that the permutation estimate is also concentrated. The bootstrap distribution B constructs a random sequence y^B of n independent uniform samples from y_1, ..., y_n; the key requirement is that the y_i^B are independent samples. There are n^n (not distinct) possible bootstrap sequences.

Lemma 5 |B_H(n, D) − P_H(n, D)| ≤ 1/√n.

Proof: Let k be the number of y_i which are +1; we condition on κ, the number of +1s in the bootstrap sample. Suppose B|κ samples uniformly among all sequences with κ entries being +1. Then

  B_H(n, D) = E_κ [ E_{B|κ} [ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_i^B h(x_i) ] ].

The key observation is that we can generate all samples uniformly according to B|κ by first generating a random permutation and then selecting randomly |k − κ| +1s (or −1s) to flip, so:

  E_{B|κ} [ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_i^B h(x_i) ] = E_{F_{|k−κ|}} E_π [ sup_{h∈H} (1/n) Σ_{i=1}^{n} y^F_{π(i)} h(x_i) ].

(F denotes the flipping random process.) Since y^F_{π(i)} differs from y_{π(i)} in exactly |k − κ| positions,

  sup_{h∈H} (1/n) Σ_{i=1}^{n} y_{π(i)} h(x_i) − 2|k − κ|/n ≤ sup_{h∈H} (1/n) Σ_{i=1}^{n} y^F_{π(i)} h(x_i) ≤ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_{π(i)} h(x_i) + 2|k − κ|/n.

Thus,

  |B_H(n, D) − P_H(n, D)| ≤ (2/n) E_κ [ |k − κ| ].

Since E_κ[|k − κ|] ≤ √(Var[k − κ]) ≤ (1/2)√n (because κ is binomial), the result follows.

In addition to furthering our cause toward the proof of Theorem 2, Lemma 5 is interesting in its own right, because it says that permutation and bootstrap sampling are asymptotically similar. The nice thing about the bootstrap estimate is that the expectation is over independent y_1^B, ..., y_n^B. Since the bootstrap complexity changes by at most 2/n if you change one sample, by McDiarmid's inequality:

Lemma 6 For a random bootstrap sample B, with probability at least 1 − δ,

  B_H(n, D) ≤ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_i^B h(x_i) + 2 √( (1/2n) ln(1/δ) ).
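Both single-draw estimates discussed in this section are cheap to compute given a learning algorithm. A hedged sketch of our own (the black box `fit_erm` is an assumption standing in for whatever empirical risk minimizer, exact or approximate, one has available; since e_in(h) = (1 − (1/n) Σ_i t_i h(x_i))/2, ERM on shuffled targets maximizes the correlation being estimated):

```python
import numpy as np

def complexity_estimate(X, y, fit_erm, mode="perm", rng=None):
    # mode="perm": the single-permutation estimate of P_H(n, D) (Theorem 2).
    # mode="boot": the bootstrap analogue B_H(n, D) (n i.i.d. draws from y).
    rng = np.random.default_rng(rng)
    if mode == "perm":
        t = rng.permutation(y)                        # permuted targets y^pi
    else:
        t = rng.choice(y, size=len(y), replace=True)  # bootstrap targets y^B
    h = fit_erm(X, t)                                 # assumed {-1,+1} predictor
    return float(np.mean(t * h(X)))
```

In practice, as the text notes, it pays to average this quantity over a few random permutations rather than relying on a single draw.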
We now prove concentration for estimating P_H(n, D). As in the proof of Lemma 5, generate y^B in two steps. First generate κ, the number of +1s in y^B; κ is binomial. Now, generate a random permutation y^π, and flip (as appropriate) a randomly selected |k − κ| entries, where k is the number of +1s in y. If we apply McDiarmid's inequality to the function which equals the number of +1s, we immediately get that with probability at least 1 − 2δ, |κ − k| ≤ ( (n/2) ln(1/δ) )^{1/2}. Thus, with probability at least 1 − 2δ, y^B differs from y^π in at most (2n ln(1/δ))^{1/2} positions. Each flip changes the complexity by at most 2/n; hence, with probability at least 1 − 2δ,

  sup_{h∈H} (1/n) Σ_{i=1}^{n} y_i^B h(x_i) ≤ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_{π(i)} h(x_i) + 4 √( (1/2n) ln(1/δ) ).

We conclude that for a random permutation π, with probability at least 1 − 3δ,

  B_H(n, D) ≤ sup_{h∈H} (1/n) Σ_{i=1}^{n} y_{π(i)} h(x_i) + 6 √( (1/2n) ln(1/δ) ).

Now, combining with Lemma 5, we obtain Theorem 2 after a little algebra, because δ < 1. We have not only established that P_H is concentrated, but we have also established a general connection between the permutation and bootstrap based estimates. In this particular case, we see that sampling with and without replacement are very closely related. In practice, sampling without replacement can be very different, because one is never in the truly asymptotic regime. Along that vein, even though we have concentration, it pays to take the average over a few permutations.

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Aut. Cont., 19, 716-723.
Asuncion, A. and Newman, D. (2007). UCI machine learning repository.
Bartlett, P. L. and Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3, 463-482.
Bartlett, P. L., Boucheron, S., and Lugosi, G. (2002). Model selection and error estimation. Machine Learning, 48, 85-113.
Craven, P. and Wahba, G. (1979). Smoothing noisy data with spline functions. Numerische Mathematik, 31, 377-403.
Cureton, E. E. (1951). Symposium: The need and means of cross-validation: II approximate linear restraints and best predictor weights. Education and Psychology Measurement, 11, 12-15.
Efron, B. (2004). The estimation of prediction error: Covariance penalties and cross-validation. Journal of the American Statistical Association, 99(467), 619-632.
Fromont, M. (2007). Model selection by bootstrap penalization for classification. Machine Learning, 66(2-3), 165-207.
Giné, E. and Zinn, J. (1984). Some limit theorems for empirical processes. Annals of Prob., 12, 929-989.
Golland, P., Liang, F., Mukherjee, S., and Panchenko, D. (2005). Permutation tests for classification. Learning Theory, pages 501-515.
Good, P. (2005). Permutation, parametric, and bootstrap tests of hypotheses. Springer.
Kääriäinen, M. and Elomaa, T. (2003). Rademacher penalization over decision tree prunings. In Proc. 14th European Conference on Machine Learning, pages 193-204.
Katzell, R. A. (1951). Symposium: The need and means of cross-validation: III cross validation of item analyses. Education and Psychology Measurement, 11, 16-22.
Koltchinskii, V. (2001). Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5), 1902-1914.
Koltchinskii, V. and Panchenko, D. (2000). Rademacher processes and bounding the risk of function learning. In E. Gine, D. Mason, and J. Wellner, editors, High Dimensional Prob. II, volume 47, pages 443-459.
Larson, S. C. (1931).
The shrinkage of the coefficient of multiple correlation. Journal of Education Psychology, 22, 45-55.
Lozano, F. (2000). Model selection using Rademacher penalization. In Proc. 2nd ICSC Symp. on Neural Comp.
Lugosi, G. and Nobel, A. (1999). Adaptive model selection using empirical complexities. Annals of Statistics, 27, 1830-1864.
Magdon-Ismail, M. and Mertsalov, K. (2010). A permutation approach to validation. In Proc. 10th SIAM International Conference on Data Mining (SDM).
Massart, P. (2000). Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse, X, 245-303.
McDiarmid, C. (1989). On the method of bounded differences. In Surveys in Combinatorics, pages 148-188. Cambridge University Press.
Mosier, C. I. (1951). Symposium: The need and means of cross-validation: I problem and designs of cross validation. Education and Psychology Measurement, 11, 5-11.
Shawe-Taylor, J. and Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Camb. Univ. Press.
Shawe-Taylor, J., Bartlett, P. L., Williamson, R. C., and Anthony, M. (1998). Structural risk minimization over data dependent hierarchies. IEEE Transactions on Information Theory, 44, 1926-1940.
Stone, M. (1974). Cross validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, 36(2), 111-147.
Vapnik, V. N. and Chervonenkis, A. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16, 264-280.
Vapnik, V. N., Levin, E., and Le Cun, Y. (1994). Measuring the VC-dimension of a learning machine. Neural Computation, 6(5), 851-876.
Wang, J. and Shen, X. (2006). Estimation of generalization error: random and fixed inputs. Statistica Sinica, 16, 569-588.
Wherry, R. J. (1931). A new formula for predicting the shrinkage of the multiple correlation coefficient. Annals of Mathematical Statistics, 2, 440-457.
Wherry, R. J. (1951). Symposium: The need and means of cross-validation: III comparison of cross validation with statistical inference of betas and multiple r from a single sample. Education and Psychology Measurement, 11, 23-28.
Wiklund, S., Nilsson, D., Eriksson, L., Sjostrom, M., Wold, S., and Faber, K. (2007). A randomization test for PLS component selection. Journal of Chemometrics, 21(10-11), 427-439.
Segmentation as Maximum-Weight Independent Set

William Brendel and Sinisa Todorovic
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97331
[email protected], [email protected]

Abstract

Given an ensemble of distinct, low-level segmentations of an image, our goal is to identify visually "meaningful" segments in the ensemble. Knowledge about any specific objects and surfaces present in the image is not available. The selection of image regions occupied by objects is formalized as the maximum-weight independent set (MWIS) problem. MWIS is the heaviest subset of mutually non-adjacent nodes of an attributed graph. We construct such a graph from all segments in the ensemble. Then, MWIS selects maximally distinctive segments that together partition the image. A new MWIS algorithm is presented. The algorithm seeks a solution directly in the discrete domain, instead of relaxing MWIS to a continuous problem, as common in previous work. It iteratively finds a candidate discrete solution of the Taylor series expansion of the original MWIS objective function around the previous solution. The algorithm is shown to converge to an optimum. Our empirical evaluation on the benchmark Berkeley segmentation dataset shows that the new algorithm eliminates the need for hand-picking optimal input parameters of the state-of-the-art segmenters, and outperforms their best, manually optimized results.

1 Introduction

This paper presents: (1) a new formulation of image segmentation as the maximum-weight independent set (MWIS) problem; and (2) a new algorithm for solving MWIS. Image segmentation is a fundamental problem, and an area of active research in computer vision and machine learning. It seeks to group image pixels into visually "meaningful" segments, i.e., those segments that are occupied by objects and other surfaces occurring in the scene. The literature abounds with diverse formulations. For example, normalized-cut [1] and dominant set [2] formulate segmentation as a combinatorial optimization problem on a graph representing image pixels. "Meaningful" segments may give rise to modes of the pixels' probability distribution [3], or minimize the Mumford-Shah energy [4]. Segmentation can also be done by: (i) integrating edge and region detection [5], (ii) learning to detect and close object boundaries [6, 7], and (iii) identifying segments which can be more easily described by their own parts than by other image parts [8, 9, 10]. From prior work, we draw the following two hypotheses. First, surfaces of real-world objects are typically made of a unique material, and thus their corresponding segments in the image are characterized by unique photometric properties, distinct from those of other regions. To capture this distinctiveness, it seems beneficial to use more expressive, mid-level image features (e.g., superpixels, regions) which will provide richer visual information for segmentation, rather than start from pixels. Second, it seems that none of a host of segmentation formulations are able to correctly delineate every object boundary present. However, an ensemble of distinct segmentations is likely to contain a subset of segments that provides accurate spatial support of object occurrences. Based on these two hypotheses, below, we present a new formulation of image segmentation.
Given an ensemble of segments, extracted from the image by a number of different low-level segmenters, our goal is to select those segments from the ensemble that are distinct, and together partition the image area. Suppose all segments from the ensemble are represented as nodes of a graph, where node weights capture the distinctiveness of corresponding segments, and graph edges connect nodes whose corresponding segments overlap in the image. Then, the selection of maximally distinctive and non-overlapping segments that will partition the image naturally lends itself to the maximum-weight independent set (MWIS) formulation.

The MWIS problem is to find the heaviest subset of mutually non-adjacent nodes of an attributed graph. It is a well-researched combinatorial optimization problem that arises in many applications. It is known to be NP-hard, and hard to approximate [11]. Numerous heuristic approaches exist. For example, iterated tabu search [12] and branch-and-price [13] use a trial-and-error, greedy search in the space of possible solutions, with an optimistic complexity estimate of O(n³), where n is the number of nodes in the graph. The message passing approach [14] relaxes MWIS into a linear program (LP), and solves it using loopy belief propagation with no guarantees of convergence for general graphs; the "tightness" of this relaxation holds only for bipartite graphs [15]. The semi-definite programming formulation of MWIS [16] provides an upper bound on the sum of weights of all independent nodes in the MWIS. However, this is done by reformulating MWIS as a large LP on a new graph with n² nodes, which is unsuitable for large-scale problems such as ours. Finally, the replicator dynamics approach [17, 18] converts the original graph into its complement, and solves MWIS as a continuous relaxation of the maximum weight clique (MWC) problem. But in some domains, including ours, important hard constraints captured by edges of the original graph may be lost in this conversion.

In this paper, we present a new MWIS algorithm, which represents a fixed-point iteration, guaranteed to converge to an optimum. It goes back and forth between the discrete and continuous domains. It visits a sequence of points {y^(t)}, t = 1, 2, ..., defined in the continuous domain, y^(t) ∈ [0,1]^n. Around each of these points, the algorithm tries to maximize the objective function of MWIS in the discrete domain. Each iteration consists of two steps. First, we use the Taylor expansion to approximate the objective function around y^(t). Maximization in the discrete domain of the approximation gives a candidate discrete solution, x̂ ∈ {0,1}^n. Second, if x̂ increases the original objective, then this candidate is taken as the current solution, and the algorithm visits that point in the next iteration, y^(t+1) = x̂; else, the algorithm visits the interpolation point, y^(t+1) = y^(t) + β(x̂ − y^(t)), which can be shown to be a local maximizer of the original objective for a suitably chosen β. The algorithm always improves the objective, finally converging to a maximum. For non-convex objective functions, our method tends to pass either through or near discrete solutions, and the best discrete one x* encountered along the path is returned. Our algorithm has relatively low complexity, O(|E|), where, in our case, |E| ≤ n² is the number of edges in the graph, and converges in only a few steps.

Contributions: To the best of our knowledge, this paper presents the first formulation of image segmentation as MWIS.
We derive a new MWIS algorithm that has low complexity, and prove that it converges to a maximum. Selecting segments from an ensemble so that they cover the entire image and minimize a total energy has been used for supervised object segmentation [19]. They estimate "good" segments by using classifiers of a pre-selected number of object classes. In contrast, our input and our approach are genuinely low-level, i.e., agnostic about any particular objects in the image. Our MWIS algorithm has lower complexity, and is arguably easier to implement than the dual decomposition they use for energy minimization. Our segmentation outperforms the state of the art on the benchmark Berkeley segmentation dataset, and our MWIS algorithm runs faster and yields on average more accurate solutions on benchmark datasets than other existing MWIS algorithms.

Overview: Our approach consists of the following steps (see Fig. 1). Step 1: The image is segmented using a number of different, off-the-shelf, low-level segmenters, including Meanshift [3], NCuts [1], and gPb-OWT-UCM [7]. Since the right scale at which objects occur in the image is unknown, each of these segmentations is conducted at an exhaustive range of scales. Step 2: The resulting segments are represented as nodes of a graph whose edges connect only those segments that (partially) overlap in the image. A small overlap between two segments, relative to their area, may be ignored, for robustness. A weight is associated with each node, capturing the distinctiveness of the corresponding segment from the others. Step 3: We find the MWIS of this graph. Step 4: The segments selected in the MWIS may not be able to cover the entire image, or may slightly overlap (holes and overlaps are marked red in Fig. 1). The final segmentation is obtained by using standard morphological operators on region boundaries to eliminate these holes and overlaps. Note that there is no need for Step 4 if the input low-level segmentation is strictly hierarchical, as gPb-OWT-UCM [7]. The same holds if we added the intersections of all input segments to the input ensemble, as in [19], because our MWIS algorithm will continue selecting non-overlapping segments until the entire image is covered.

Figure 1: Our main steps: (a) Input segments extracted at multiple scales by different segmentation algorithms; (b) Constructing a graph of all segments, and finding its MWIS (marked green); (c) Segments selected by our MWIS algorithm (red areas indicate overlaps and holes); (d) Final segmentation after region-boundary refinement (actual result using Meanshift and NCuts as input).

Paper Organization: Sec. 2 formulates MWIS, and presents our MWIS algorithm and its theoretical analysis. Sec. 3 formulates image segmentation as MWIS, and describes how to construct the segmentation graph. Sec. 4 and Sec. 5 present our experimental evaluation and conclusions.

2 MWIS Formulation and Our Algorithm

Consider a graph G = (V, E, ω), where V and E are the sets of nodes and undirected edges, with cardinalities |V| = n and |E|, and ω : V → R+ associates positive weights w_i with every node i ∈ V, i = 1, ..., n. A subset of V can be represented by an indicator vector x = (x_i) ∈ {0,1}^n, where x_i = 1 means that i is in the subset, and x_i = 0 means that i is not. A subset x is called an independent set if no two nodes in the subset are connected by an edge: ∀(i,j) ∈ E : x_i x_j = 0. We are interested in finding a maximum-weight independent set (MWIS), denoted x*.
MWIS can be naturally posed as the following integer program (IP):

  IP:  x* = argmax_x w^T x,  s.t. ∀i ∈ V : x_i ∈ {0,1}, and ∀(i,j) ∈ E : x_i x_j = 0.   (1)

The non-adjacency constraint in (1) can be equivalently formalized as Σ_{(i,j)∈E} x_i x_j = 0. The latter expression can be written as a quadratic constraint, x^T A x = 0, where A = (A_ij) is the adjacency matrix, with A_ij = 1 if (i,j) ∈ E, and A_ij = 0 if (i,j) ∉ E. Consequently, the IP can be reformulated as the following integer quadratic program (IQP):

  x* = argmax_x w^T x,  s.t. ∀i ∈ V : x_i ∈ {0,1}, x^T A x = 0
  ⇒ ∃α ∈ R:  IQP:  x* = argmax_x [ w^T x − (1/2) α x^T A x ],  s.t. ∀i ∈ V : x_i ∈ {0,1},   (2)

where there exists a positive regularization parameter α > 0 such that the implication in (2) holds. Next, we present our new algorithm for solving MWIS.
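On a graph small enough to enumerate, the equivalence between (1) and (2) can be checked by brute force. A minimal sketch of our own (the 4-cycle toy example is ours, not from the paper):

```python
import itertools
import numpy as np

def brute_force_iqp(w, A, alpha):
    # Maximize f(x) = w.x - 0.5*alpha*x.A.x over all x in {0,1}^n.
    best_x, best_f = None, -np.inf
    for bits in itertools.product((0, 1), repeat=len(w)):
        x = np.array(bits, dtype=float)
        fx = w @ x - 0.5 * alpha * (x @ A @ x)
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# A 4-cycle with edges (0,1), (1,2), (2,3), (3,0):
w = np.array([1.0, 0.9, 0.8, 0.7])
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x, fx = brute_force_iqp(w, A, alpha=2.0)
print(x, fx)  # [1. 0. 1. 0.] 1.8 -- the MWIS {0, 2}, an independent set
```

With alpha = 2 the quadratic penalty for any adjacent pair already exceeds the largest normalized node weight, so the IQP maximizer coincides with the MWIS of (1) on this toy graph.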
2.1 The Algorithm

As reviewed in Sec. 1, to solve the IQP in (2), the integer constraint is usually either ignored, or relaxed to a continuous QP, e.g., by ∀i ∈ V : x_i ≥ 0 and ||x|| = 1. For example, when the ℓ1 norm is used as relaxation, the solution x* of (2) can be found using replicator dynamics in the continuous domain [17]. Also, when only ∀i ∈ V : x_i ≥ 0 is used as relaxation, then the IP of (1) can be solved via message passing [14]. Usually, the solution found in the continuous domain is binarized to obtain a discrete solution. This may lead to errors, especially if the relaxed QP is nonconvex [20]. In this paper, we present a new MWIS algorithm that iteratively seeks a solution directly in the discrete domain. A discrete solution is computed by maximizing the first-order Taylor series approximation of the quadratic objective in (2) around a solution found in the previous iteration. This is similar to the method of [20], which, however, makes the restrictive assumptions that the matrix of the quadratic term (the analog of our A) is "close" to positive-semi-definite (PSD), or that it is rank-1 with non-negative elements. These assumptions are not suitable for image segmentation. Graduated assignment [21] also iteratively maximizes a Taylor series expansion of a continuous QP around the previous solution, but this is done in the continuous domain. Since A in (2) is not PSD, our algorithm guarantees convergence only to a local maximum, as do most state-of-the-art MWIS algorithms [12, 13, 14, 17, 18]. Below, we describe the main steps of our MWIS algorithm.

Let f(x) = w^T x − (1/2) α x^T A x denote the objective function of the IQP in (2). Also, in our notation, x, x̂, x* ∈ {0,1}^n denote a point, candidate solution, and solution, respectively, in the discrete domain; and y ∈ [0,1]^n denotes a point in the continuous domain. Our algorithm is a fixed-point iteration that solves a sequence of integer programs which are convex approximations of f, around a solution found in the previous iteration. The key intuition is that the approximations are simpler functions than f, and thus facilitate computing the candidate discrete solutions in each iteration. The algorithm increases f in every iteration until convergence.

Our algorithm visits a sequence of continuous points {y^(1), ..., y^(t), ...}, y^(t) ∈ [0,1]^n, in iterations t = 1, 2, ..., and finds discrete candidate solutions x̂ ∈ {0,1}^n in their respective neighborhoods, until convergence. Each iteration t consists of two steps. First, for any point y ∈ [0,1]^n in the neighborhood of y^(t), we find the first-order Taylor series approximation of f(y) as

  f(y) ≈ h(y, y^(t)) = f(y^(t)) + (y − y^(t))^T (w − αAy^(t)) = y^T (w − αAy^(t)) + const,   (3)

where 'const' does not depend on y. Note that the approximation h(y, y^(t)) is convex in y, and simpler than f(y), which allows us to easily compute a discrete maximizer of h(·) as

  x̂ = argmax_{x∈{0,1}^n} h(x, y^(t))  ⇒  x̂_i = 1 if the ith element (w − αAy^(t))_i ≥ 0, and x̂_i = 0 otherwise.   (4)

To avoid the trivial discrete solution, when x̂ = 0 we instead set x̂ = [0, ..., 0, 1, 0, ..., 0]^T, with x̂_i = 1 where i is the index of the minimum element of (w − αAy^(t)).

In the second step of iteration t, the algorithm verifies whether x̂ can be accepted as a new, valid discrete solution. This will be possible only if f is non-decreasing, i.e., if f(x̂) ≥ f(y^(t)). In this case, the algorithm visits the point y^(t+1) = x̂ in the next iteration. In case f(x̂) < f(y^(t)), this means that there must be a local maximum of f in the neighborhood of the points y^(t) and x̂. We estimate this local maximizer of f in the continuous domain by linear interpolation, y^(t+1) = y^(t) + β(x̂ − y^(t)). The optimal value of the interpolation parameter β ∈ [0,1] is computed such that ∂f(y^(t+1))/∂β ≥ 0, which ensures that f is non-decreasing in the next iteration. As shown in Sec. 2.2, the optimal β has a closed-form solution:

  β = min( max( (x̂ − y^(t))^T (w − αAy^(t)) / ( α (x̂ − y^(t))^T A (x̂ − y^(t)) ), 0 ), 1 ).   (5)

Having computed y^(t+1), the algorithm starts the next iteration by finding a Taylor series approximation in the neighborhood of the point y^(t+1). After convergence, the latest discrete solution x̂ is taken to represent the final solution of MWIS, x* = x̂. Our MWIS algorithm is summarized in Alg. 1.
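A compact Python rendering of the iteration just described is given below. This is a sketch based on equations (3)-(5), not the authors' code; in particular, our zero-candidate fallback picks the coordinate with the largest (least negative) coefficient, which maximizes the approximation (3), whereas the text's fallback names the minimum element of (w − αAy^(t)), so treat that choice as our assumption:

```python
import numpy as np

def mwis_fixed_point(w, A, alpha=2.0, eps=1e-6, max_iter=100):
    # w: positive node weights (n,); A: symmetric 0/1 adjacency (n, n).
    w = np.asarray(w, dtype=float)
    A = np.asarray(A, dtype=float)
    f = lambda v: w @ v - 0.5 * alpha * (v @ A @ v)   # IQP objective (2)
    y = np.zeros(len(w))
    y[int(np.argmax(w))] = 1.0                 # some nonzero y(0) in {0,1}^n
    x_star = y.copy()
    for _ in range(max_iter):
        grad = w - alpha * (A @ y)             # coefficients of h(., y(t)), eq. (3)
        x_hat = (grad >= 0).astype(float)      # discrete maximizer, eq. (4)
        if not x_hat.any():                    # fallback: a single-node candidate
            x_hat[int(np.argmax(grad))] = 1.0
        d = x_hat - y
        if f(x_hat) >= f(y):
            y_next = x_hat                     # accept the discrete candidate
        else:
            q = alpha * (d @ A @ d)            # denominator in eq. (5)
            beta = 1.0 if q <= 0 else float(np.clip((d @ grad) / q, 0.0, 1.0))
            y_next = y + beta * d              # interpolate toward x_hat
        if f(x_hat) >= f(x_star):
            x_star = x_hat.copy()              # best discrete point so far
        if np.linalg.norm(y_next - y) < eps:
            break
        y = y_next
    return x_star
```

On the 4-cycle example sketched after (2), mwis_fixed_point(w, A) returns [1, 0, 1, 0] after two iterations, matching the brute-force maximizer.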
2.2 Theoretical Analysis

This section presents the proof that our MWIS algorithm converges to a maximum. We also show that its complexity is O(|E|). We begin by stating a lemma that pertains to the linear interpolation y^(t+1) = y^(t) + β(x̂ − y^(t)), such that the IQP objective function f is non-decreasing at y^(t+1).

Lemma 1 Suppose that the IQP objective function f is increasing at a point y_1 ∈ [0,1]^n, and decreasing at a point y_2 ∈ [0,1]^n, y_1 ≠ y_2. Then, there exists a point y = y_1 + β(y_2 − y_1), y ∈ [0,1]^n, such that f is increasing at y, where β is an interpolation parameter, β ∈ [0,1].

Proof: It is straightforward to show that β ∈ [0,1] ⇒ y ∈ [0,1]^n. For β = 0, we obtain y = y_1, where f is said to be increasing. For β ≠ 0, y can be found by estimating β such that ∂f(y_1 + β(y_2 − y_1))/∂β ≥ 0. It follows:

  (w − αAy_1)^T (y_2 − y_1) − βα (y_2 − y_1)^T A (y_2 − y_1) ≥ 0.

Define the auxiliary terms c = (w − αAy_1)^T (y_2 − y_1) and d = α (y_2 − y_1)^T A (y_2 − y_1). Since A is not PSD, we obtain β ≤ c/d for d > 0, and β ≥ c/d for d < 0. Since β ∈ [0,1], we compute β = min(max(c/d, 0), 1), which is equivalent to (5), for y_1 = y^(t) and y_2 = x̂.

In the following, we define the notion of a maximum, and prove that Alg. 1 converges to a maximum.

Definition: We refer to a point y* as a maximum of a real, differentiable function g(y), defined over a domain D, g : D → R, if there exists a neighborhood of y*, N(y*) ⊆ D, such that ∀y ∈ N(y*) : g(y*) ≥ g(y).

Proposition 1 Alg. 1 increases f in every iteration, and converges to a maximum.

Proof: In iteration t of Alg. 1, if f(x̂) ≥ f(y^(t)), then the next point visited by Alg. 1 is y^(t+1) = x̂. Thus, f increases in this case. Else, y^(t+1) = y^(t) + β(x̂ − y^(t)), yielding

  f(y^(t+1)) = f(y^(t)) + β (w − αAy^(t))^T (x̂ − y^(t)) − (1/2) β² α (x̂ − y^(t))^T A (x̂ − y^(t)).   (6)

Since x̂ maximizes h, given by (3), we have h(x̂, y^(t)) − h(y^(t), y^(t)) = (w − αAy^(t))^T (x̂ − y^(t)) ≥ 0. Also, from Lemma 1, β is non-negative. Consequently, the second term in (6) is non-negative. Regarding the third term in (6), from (5) we have βα (x̂ − y^(t))^T A (x̂ − y^(t)) ≤ (w − αAy^(t))^T (x̂ − y^(t)), which we have already proved to be non-negative; hence the third term is at most half of the second in magnitude, and f(y^(t+1)) − f(y^(t)) ≥ (β/2)(w − αAy^(t))^T (x̂ − y^(t)) ≥ 0. Thus, f also increases in this second case. Since f ≤ w^T 1, and f increases in every iteration, f converges to a maximum.

Complexity: Alg. 1 has complexity O(|E|) per iteration. The complexity depends only on a few matrix-vector multiplications with A, where each takes O(|E|). This is because A is sparse and binary, where each element A_ij = 1 iff (i,j) ∈ E. Thus, any computation in Alg. 1 pertaining to a particular node i ∈ V depends on the number of positive elements in the ith row A_i·, i.e., on the branching factor of i. Computing x̂ in (4) has complexity O(n), where n < |E|, and thus does not affect the final complexity. For the special case of balanced graphs, Alg. 1 has complexity O(|E|) = O(n log n). In our experiments, Alg. 1 converges in 5-10 iterations on graphs with about 300 nodes.

3 Formulating Segmentation as MWIS

We formulate image segmentation as the MWIS of a graph of image regions obtained from different segmentations. Below, we explain how to construct this graph. Given a set of all segments, V, extracted from the image by a number of distinct segmenters, we construct a graph, G = (V, E, ω), where V and E are the sets of nodes and undirected edges, and ω : V → R+ assigns positive weights w_i to every node i ∈ V, i = 1, ..., n. Two nodes i and j are adjacent, (i,j) ∈ E, if their respective segments S_i and S_j overlap in the image, S_i ∩ S_j ≠ ∅. This can be conceptualized by the adjacency matrix A = (A_ij), where A_ij = 1 iff S_i ∩ S_j ≠ ∅, and A_ij = 0 iff S_i ∩ S_j = ∅. For robustness in our experiments, we tolerate a relatively small amount of overlap by setting a tolerance threshold θ, such that A_ij = 1 if |S_i ∩ S_j| / min(|S_i|, |S_j|) > θ, and A_ij = 0 otherwise. (In our experiments we use θ = 0.2.) Note that the IQP in (2) also permits a "soft" definition of A, which is beyond our scope.

The weights w_i should be larger for more "meaningful" segments S_i, so that these segments are more likely to be included in the MWIS of G. Following the compositionality-based approaches of [8, 9], we define that a "meaningful" segment can be easily described in terms of its own parts, but is difficult to describe via other parts of the image. Note that this definition is suitable for identifying both: (i) distinct textures in the image, since texture can be defined as a spatial repetition of elementary 2D patterns; and (ii) homogeneous regions with smooth variations of brightness. To define w_i, we use the formalism of [8], where the easiness and difficulty of describing S_i is evaluated by its description length in terms of visual codewords. Specifically, given a dictionary of visual codewords, and the histogram of occurrence of the codewords in S_i, we define w_i = |S_i| KL(S_i, S̄_i), where KL denotes the Kullback-Leibler divergence, I is the input image, and S̄_i = I \ S_i. All the weights w are normalized by max_i w_i. Below, we explain how to extract the dictionary of codewords. Similar to [22], we describe every pixel with an 11-dimensional descriptor vector consisting of the Lab colors and filter responses of the rotationally invariant, nonlinear MR8 filter bank, along with the Laplacian of Gaussian filters.
All pixels grouped within one cluster are labeled with a unique codeword id of that cluster. Then, the histogram of their occurrence in every region Si is estimated. Given G, as described in this section, we use our MWIS algorithm to select ?meaningful? segments, and thus partition the image. Note that the selected segments will optimally cover the entire image, otherwise any uncovered image areas will be immediately filled out by available segments in V that do not overlap with already selected ones, because this will increase the IQP objective function f . In the case when the input segments do not form a strict hierarchy and intersections of the input segments have not been added to V , we eliminate holes (or ?soft? overlaps) between the selected segments by applying the standard morphological operations (e.g., thinning and dilating of regions). 4 Results This section presents qualitative and quantitative evaluation of our segmentation on 200 images from the benchmark Berkeley segmentation dataset (BSD) [23]. BSD images are challenging for segmentation, because they contain complex layouts of distinct textures (e.g., boundaries of several regions meet at one point), thin and elongated shapes, and relatively large illumination changes. We also evaluate the generality and execution time of our MWIS algorithm on a synthetic graph from benchmark OR-Library [24], and the problem sets from [12]. Our MWIS algorithm is evaluated for the following three types of input segmentations. The first type is a hierarchy of segments produced by the gPb-OWT-UCM method of [7]. gPb-OWT-UCM uses the perceptual significance of a region boundary, Pb ? [0, 100], as an input parameter. To obtain the hierarchy, we vary Pb = 20:5:70. The second type is a hierarchy of segments produced by the multiscale algorithm of [5]. This method uses pixel-intensity contrast, ? ? [0, 255], as an input parameter. To obtain the hierarchy, we vary ? = 30:20:120. Finally, the third type is a union of NCut [1] and Meanshift [3] segments. Ncut uses one input parameter ? namely, the total number of regions, N , in the image. Meanshift uses three input parameters: feature bandwidth bf , spatial bandwidth bs , and minimum region area Smin . We vary these parameters as N = 10:10:100, bf = 5.5:0.5:8.5, bs = 4:2:10, and Smin = 100:200:900. The variants [7]+Ours and [5]+Ours serve to test whether our approach is capable of extracting ?meaningful? regions from a multiscale segmentation. The variant ([3]+[1])+Ours evaluates our hypothesis that reasoning over an ensemble of distinct segmentations improves each individual one. Segmentation of BSD images is used for a comparison with replicator dynamics approach of [17], which transforms the MWIS problem into the maximum weight clique problem, and then relaxes it into a continuous problem, denoted as MWC. In addition, we also use data from other domains ? specifically, OR-Library [24] and the problem sets from [12] ? for a comparison with other state-ofthe-art MWIS algorithms. Qualitative evaluation: Fig. 3 and Fig. 4 show the performance of our variant [7]+Ours on example images from BSD. Fig. 4 also shows the best segmentations of [7] and [25], obtained by an exhaustive search for the optimal values of their input parameters. As can be seen in Fig. 4, the method of [7] misses to segment the grass under the tiger, and oversegments the starfish and the camel, which we correct. 
4 Results

This section presents a qualitative and quantitative evaluation of our segmentation on 200 images from the benchmark Berkeley segmentation dataset (BSD) [23]. BSD images are challenging for segmentation, because they contain complex layouts of distinct textures (e.g., boundaries of several regions meeting at one point), thin and elongated shapes, and relatively large illumination changes. We also evaluate the generality and execution time of our MWIS algorithm on synthetic graphs from the benchmark OR-Library [24], and on the problem sets from [12].

Our MWIS algorithm is evaluated for the following three types of input segmentations. The first type is a hierarchy of segments produced by the gPb-OWT-UCM method of [7]. gPb-OWT-UCM uses the perceptual significance of a region boundary, Pb ∈ [0, 100], as an input parameter. To obtain the hierarchy, we vary Pb = 20:5:70. The second type is a hierarchy of segments produced by the multiscale algorithm of [5]. This method uses pixel-intensity contrast, with values in [0, 255], as an input parameter. To obtain the hierarchy, we vary the contrast as 30:20:120. Finally, the third type is a union of NCut [1] and Meanshift [3] segments. NCut uses one input parameter, namely the total number of regions, N, in the image. Meanshift uses three input parameters: feature bandwidth b_f, spatial bandwidth b_s, and minimum region area S_min. We vary these parameters as N = 10:10:100, b_f = 5.5:0.5:8.5, b_s = 4:2:10, and S_min = 100:200:900. The variants [7]+Ours and [5]+Ours serve to test whether our approach is capable of extracting "meaningful" regions from a multiscale segmentation. The variant ([3]+[1])+Ours evaluates our hypothesis that reasoning over an ensemble of distinct segmentations improves each individual one. Segmentation of BSD images is used for a comparison with the replicator dynamics approach of [17], which transforms the MWIS problem into the maximum weight clique problem, and then relaxes it into a continuous problem, denoted as MWC. In addition, we also use data from other domains (specifically, OR-Library [24] and the problem sets from [12]) for a comparison with other state-of-the-art MWIS algorithms.

Qualitative evaluation: Fig. 3 and Fig. 4 show the performance of our variant [7]+Ours on example images from BSD. Fig. 4 also shows the best segmentations of [7] and [25], obtained by an exhaustive search for the optimal values of their input parameters. As can be seen in Fig. 4, the method of [7] misses segmenting the grass under the tiger, and oversegments the starfish and the camel, which we correct. Our approach eliminates the need for hand-picking the optimal input parameters in [7], and yields results that are good even in cases when objects have complex textures (e.g., tiger and starfish), or when the boundaries are blurred or jagged (e.g., camel).

Quantitative evaluation: Table 1 presents segmentations of BSD images using our three variants: [7]+Ours, [5]+Ours, and ([3]+[1])+Ours. We consider the standard metrics: Probabilistic Rand Index (PRI) and Variation of Information (VI) [26]. The PRI between estimated and ground-truth segmentations, S and G, is defined as the sum of the number of pairs of pixels that have the same label in both S and G, and those that have different labels in both segmentations, divided by the total number of pairs of pixels. VI measures the distance between S and G in terms of their average conditional entropy. PRI should be large, and VI small.
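The PRI computation can be vectorized through the contingency table of the two labelings. A minimal sketch of our own for the single-ground-truth case, which reduces to the classical Rand index (BSD's averaging over multiple human segmentations is omitted here):

```python
import numpy as np

def rand_index(s, g):
    # Fraction of pixel pairs on which labelings s and g agree:
    # same label in both, or different labels in both.
    s = np.asarray(s).ravel()
    g = np.asarray(g).ravel()
    n = s.size
    _, s_idx = np.unique(s, return_inverse=True)
    _, g_idx = np.unique(g, return_inverse=True)
    cont = np.zeros((s_idx.max() + 1, g_idx.max() + 1))
    np.add.at(cont, (s_idx, g_idx), 1)        # n_kl = |{i: s_i=k, g_i=l}|
    comb2 = lambda x: x * (x - 1) / 2.0
    sum_nkl = comb2(cont).sum()
    sum_a = comb2(cont.sum(axis=1)).sum()     # pairs with same label in s
    sum_b = comb2(cont.sum(axis=0)).sum()     # pairs with same label in g
    total = comb2(n)
    return (total + 2 * sum_nkl - sum_a - sum_b) / total
```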
b2500 [24] 2 74 0 21 p3000-7000 [12] 175 1650 62 427 Table 2: Average of solution difference, and computation time in seconds for problem sets from [24] and [12]. MWIS performance: We also test our Alg. 1 on two sets of problems beyond image segmentation. As input we use a graph constructed from data from the OR-Library [24], and from the problem sets presented in [12]. For the first set of problems (b2500), we only consider the largest graphs. We use ten instances, called b2500-1 to b2500-10, of size 2500 and with density 10%. For the second set of problem (p3000 to p7000), we take into account graphs of size 4000, 5000, 6000 and 7000. Five graph instances per size are used. Tab. 2 shows the average difference between the estimated and ground-truth solution, and computation time in seconds. The presented comparison with Iterative Tabu Search (ITS) [12] demonstrates that, on average, we achieve better performance, under much smaller running times. 7 Figure 3: Segmentation of BSD images. (top) Original images. (bottom) Results using our variant [7]+Ours. Failures, such as the painters? shoulder, the bird?s lower body part, and the top left fish, occur simply because these regions are not present in the input segmentations. Figure 4: Comparison with the state-of-the-art segmentation algorithms on BSD images. (top row) Original images. (middle row) The three left results are from [7], and the rightmost result is from [25]. (bottom row) Results of [7]+Ours. By extracting ?meaningful? segments from a segmentation hierarchy produced by [7] we correct the best, manually optimized results of [7]. 5 Conclusion To our knowledge, this is the first attempt to formulate image segmentation as MWIS. Our empirical findings suggest that this is a powerful framework that permits good segmentation performance regardless of a particular MWIS algorithm used. We have presented a new fixed point algorithm that efficiently solves MWIS, with complexity O(|E|), on a graph with |E| edges, and proved that the algorithm converges to a maximum. Our MWIS algorithm seeks a solution directly in the discrete domain, instead of resorting to the relaxation, as is common in the literature. We have empirically observed that our algorithm runs faster and outperforms the other competing MWIS algorithms on benchmark datasets. Also, we have shown a comparison with the state-of-the-art segmenter [7] on the benchmark Berkeley segmentation dataset. Our selection of ?meaningful? regions from a segmentation hierarchy produced by [7] outperforms the manually optimized best results of [7], in terms of Probabilistic Rand Index and Variation of Information. 8 References [1] J. Shi and J. Malik, ?Normalized cuts and image segmentation,? IEEE TPAMI, vol. 22, no. 8, pp. 888?905, 2000. [2] M. Pavan and M. Pelillo, ?Dominant sets and pairwise clustering,? IEEE TPAMI, vol. 29, no. 1, pp. 167? 172, 2007. [3] D. Comaniciu and P. Meer, ?Meanshift: a robust approach toward feature space analysis,? IEEE TPAMI, vol. 24, no. 5, pp. 603?619, 2002. [4] M. Kass, A. Witkin, and D. Terzopoulos, ?Snakes: Active contour models,? IJCV, vol. V1, no. 4, pp. 321?331, 1988. [5] N. Ahuja, ?A transform for multiscale image segmentation by integrated edge and region detection,? IEEE TPAMI, vol. 18, no. 12, pp. 1211?1235, 1996. [6] X. Ren, C. Fowlkes, and J. Malik, ?Learning probabilistic models for contour completion in natural images,? IJCV, vol. 77, no. 1-3, pp. 47?63, 2008. [7] P. Arbelaez, M. Maire, C. Fowlkes, and J. 
Malik, ?From contours to regions: An empirical evaluation,? in CVPR, 2009. [8] S. Bagon, O. Boiman, and M. Irani, ?What is a good image segment? A unified approach to segment extraction,? in ECCV, 2008. [9] S. Todorovic and N. Ahuja, ?Texel-based texture segmentation,? in ICCV, 2009. [10] B. Russell, A. Efros, J. Sivic, B. Freeman, and A. Zisserman, ?Segmenting scenes by matching image composites,? in NIPS, 2009. [11] L. Trevisan, ?Inapproximability of combinatorial optimization problems,? Electronic Colloquium on Computational Complexity, Tech. Rep. TR04065, 2004. [12] G. Palubeckis, ?Iterated tabu search for the unconstrained binary quadratic optimization problem,? Informatica, vol. 17, no. 2, pp. 279?296, 2006. [13] D. Warrier, W. E. Wilhelm, J. S. Warren, and I. V. Hicks, ?A branch-and-price approach for the maximum weight independent set problem,? Netw., vol. 46, no. 4, pp. 198?209, 2005. [14] S. Sanghavi, D. Shah, and A. S. Willsky, ?Message-passing for max-weight independent set,? in NIPS, 2007. [15] M. Groetschel, L. Lovasz, and A. Schrijver, ?Polynomial algorithms for perfect graphs,? in Topics on Perfect Graphs, C. Berge and V. Chvatal, Eds. North-Holland, 1984, vol. 88, pp. 325 ? 356. [16] M. Todd, ?Semidefinite optimization,? Acta Numerica, vol. 10, pp. 515?560, 2001. [17] I. M. Bomze, M. Pelillo, and V. Stix, ?Approximating the maximum weight clique using replicator dynamics,? IEEE Trans. Neural Net., vol. 11, no. 6, pp. 1228?1241, 2000. [18] S. Busygin, C. Ag, S. Butenko, and P. M. Pardalos, ?A heuristic for the maximum independent set problem based on optimization of a quadratic over a sphere,? Journal of Combinatorial Optimization, vol. 6, pp. 287?297, 2002. [19] M. P. Kumar and D. Koller, ?Efficiently selecting regions for scene understanding,? in CVPR, 2010. [20] M. Leordeanu, M. Hebert, and R. Sukthankar, ?An integer projected fixed point method for graph matching and MAP inference,? in NIPS, 2009. [21] S. Gold and A. Rangarajan, ?A graduated assignment algorithm for graph matching,? IEEE TPAMI, vol. 18, no. 4, pp. 377?388, 1996. [22] M. Varma and R. Garg, ?Locally invariant fractal features for statistical texture classification,? in ICCV, 2007. [23] D. Martin, C. Fowlkes, D. Tal, and J. Malik, ?A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,? in ICCV, 2001. [24] J. E. Beasley, ?Obtaining test problems via internet,? Journal of Global Optimization, vol. 8, no. 4, pp. 429?433, 1996. [25] M. Galun, E. Sharon, R. Basri, and A. Brandt, ?Texture segmentation by multiscale aggregation of filter responses and shape elements,? in ICCV, 2003, pp. 716?723. [26] R. Unnikrishnan, C. Pantofaru, and M. Hebert, ?Toward objective evaluation of image segmentation algorithms,? IEEE TPAMI, vol. 29, no. 6, pp. 929?944, 2007. 9
Designing Linear Threshold Based Neural Network Pattern Classifiers

Terrence L. Fine
School of Electrical Engineering
Cornell University
Ithaca, NY 14853

Abstract

The three problems that concern us are identifying a natural domain of pattern classification applications of feedforward neural networks, selecting an appropriate feedforward network architecture, and assessing the tradeoff between network complexity, training set size, and statistical reliability as measured by the probability of incorrect classification. We close with some suggestions, for improving the bounds that come from Vapnik-Chervonenkis theory, that can narrow, but not close, the chasm between theory and practice.

1 Speculations on Neural Network Pattern Classifiers

(1) The goal is to provide rapid, reliable classification of new inputs from a pattern source. Neural networks are appropriate as pattern classifiers when the pattern sources are ones of which we have little understanding, beyond perhaps a nonparametric statistical model, but we have been provided with classified samples of features drawn from each of the pattern categories. Neural networks should be able to provide rapid and reliable computation of complex decision functions. The issue in doubt is their statistical response to new inputs.

(2) The pursuit of optimality is misguided in the context of Point (1). Indeed, it is unclear what might be meant by 'optimality' in the absence of a more detailed mathematical framework for the pattern source.

(3) The well-known, oft-cited 'curse of dimensionality' exposed by Richard Bellman may be a 'blessing' to neural networks. Individual network processing nodes (e.g., linear threshold units) become more powerful as the number of their inputs increases. For a large enough number n of points in an input space of d dimensions, the number of dichotomies that can be generated by such a node grows exponentially in d. This suggests that, unlike all previous efforts at pattern classification, which required substantial effort directed at the selection of low-dimensional feature vectors so as to make the decision rule calculable, we may now be approaching a position from which we can exploit raw data (e.g., the actual samples in a time series or pixel values in an image). Even if we are as yet unable to achieve this, it is clear from the reports on actual pattern classifiers that have been presented at NIPS90 and the accompanying Keystone Workshop that successful neural network pattern classifiers have been constructed that accept as inputs feature vectors having hundreds of components (e.g., Guyon, et al. [1990]).

(4) The blessing of dimensionality is not granted if there is either a large subset of critically important components that will force the network to be too complex or a small subset that contains almost all of the information needed for accurate discrimination. The network is liable to be successful in those cases where the input or feature vector x has components that are individually nearly irrelevant, although collectively they enable us to discriminate well. Examples of such feature vectors might be the responses of individual fibers in the optic nerve, a pixel array for an image of an alphanumeric character, or the set of time samples of an acoustic transient. No one fiber, pixel value, or time sample provides significant information as to the true pattern category, although all of them taken together may enable us to do nearly error-free classification.
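The growth claim in Point (3) can be checked numerically. A minimal sketch, assuming the count for a single linear threshold unit is given by Cover's function-counting theorem (the paper does not cite this result explicitly, so the formula below is our assumption):

```python
from math import comb

def ltu_dichotomies(n, d):
    """Number of dichotomies of n points in general position in R^d
    realizable by one linear threshold unit (Cover's count):
    C(n, d) = 2 * sum_{k=0}^{d-1} binom(n-1, k)."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

# For fixed n the count grows rapidly with input dimension d,
# saturating at 2^n once d >= n (every dichotomy realizable).
for d in (2, 10, 50, 100):
    print(d, ltu_dichotomies(200, d))
```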
An example in which all components are critically important is the calculation of parity. On our account, this is the sort of problem for which neural networks are inappropriate, albeit it has been repeatedly established that they can calculate parity. We interpret 'critically important' very weakly as meaning that the subspace spanned by the subset of critically important features/inputs needs to be partitioned by the classifier so that there is at least one bounded region. If the nodes are linear threshold units, then to carve out a bounded region, minimally a simplex, in a subspace of dimension c, where c is the size of the subset of critically important inputs, will require a network having at least c + 1 nodes in the first layer.

(5) Neural networks have opened up a new application domain wherein in practice we can intelligently construct nonlinear pattern classifiers characterized by thousands of parameters. In practice, nonlinear statistical models, ones not defined in terms of a covariance matrix, seem to be restricted to a few parameters.

(6) Nonetheless, Occam's Razor advises us to be sparing of parameters. We should be particularly cautious about the problem of overfitting when the number of parameters in the network is not much less than the number of training samples. Theory needs to provide practice with better insight and guidelines for avoiding overfitting, and for the use of restrictions on training time as a guard against overfitting a system with almost as many adjustable parameters as there are data points.

(7) Points (1) and (5) combine to suggest that analytical approaches to network performance evaluation based upon typical statistical ideas may either be difficult to carry out or yield conclusions of little value to practice. There is no mismatch between statistical theory and neural networks in principle, but there does seem to be a significant mismatch in practice. While we are usually dealing with thousands of training samples, the complexity of the network means that we are not in a regime where asymptotic analyses (large sample behavior) will prove informative. On the other hand, the problem is far too complex to be resolved by 'exact' small sample analyses. These considerations serve to validate the widespread use of simulation studies to assess network design and performance.

2 The QED Architecture

2.1 QED Overview

One may view a classifier as either making the decision as to the correct class or as providing 'posterior' probabilities for the various classes. If we adopt the latter approach, then the use of sigmoidal units having a continuum of responses is appropriate. If, however, we adopt the first approach, then we require hard-limiting devices to select one of only finitely many (in our case only two) pattern classes. This is the approach that we adopt, and it leads us to reliance upon linear threshold units (LTUs). We have focused our attention upon a flexible architecture consisting of a first hidden layer that is viewed as a quantizer of the input feature vector x and is therefore referred to as the Q-layer. The binary outputs from the Q-layer are then input to a second hidden layer whose function is to expand the dimension of the set of Q-layer outputs. The E-layer enables us to exploit the blessing of dimensionality in that, by choosing it wide enough, we can ensure that all Boolean functions of the binary outputs of the Q-layer are implementable as linearly separable functions of the E-layer outputs. Hence, to implement a binary classifier we need a third layer consisting of only a single node to effect the desired decision, and this output layer is referred to as the D-layer. The layers taken together are called a QED architecture.

2.2 Constructing the Q-Layer

The first layer in a feedforward neural network having LTUs can always be viewed as a quantizer. Subsequent layers in the network only see the input x through the window provided by the first layer quantization. We do not expect to be able to quantize/partition the input space, say R^d for large d, into many small compact regions; to do so would require that m >> d, as noted in Point (4) of the preceding section. Hence, asymptotic results drawn from deterministic approximation theory are unlikely to be helpful here. One might have recourse to the large literature on vector quantization (e.g., the special issue on quantization of the IEEE Transactions on Information Theory, March 1982), but we expect to quantize vectors of high dimension into a relatively small number of regions. Most of the information-theoretic literature on vector quantization does not address this domain of very low information rate (bits/coordinate). A more promising direction is that of clustering algorithms (e.g., k-means as in Pollard [1982], Darken and Moody [1990]) to guide the choice of Q-layer.

2.3 Constructing the E,D-Layers

Space limitations prevent us from a detailed discussion of the formation of the E and D layers. In brief, the E-layer can be composed of 2^m, often fewer, nodes, where the weights to the ith node from the m Q-layer nodes are a binary representation of the index i with '0' replaced by '-1'. No training is required for the E-layer. The desired D-layer responses of 0 or 1 are formed simply by assigning weight t to connections from E-layer nodes corresponding to input patterns from class t, and summing and thresholding at 1/2. The training set T must be consulted to determine, say, on the basis of majority rule, the category t ∈ {0, 1} to assign to a given E-layer node.
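The E- and D-layer construction just described lends itself to a direct implementation. A minimal sketch, assuming the Q-layer outputs are already available as ±1 vectors and using a majority vote over the training set to label each cell (the array layout and helper names are ours, not the paper's):

```python
import numpy as np

def build_ed_layers(q_codes, labels, m):
    """Build E- and D-layer weights for a QED net with m Q-layer bits.

    q_codes: (n, m) array of +/-1 Q-layer outputs for the training set.
    labels:  (n,) array of 0/1 class labels.
    Returns the E-layer weight matrix (2^m, m) and D-layer weights (2^m,)."""
    # E-layer: row i is the +/-1 binary representation of index i, so
    # E-node i achieves weighted sum m only for the i-th Q-layer pattern.
    E = np.array([[1 if (i >> b) & 1 else -1 for b in range(m)]
                  for i in range(2 ** m)])
    # D-layer: majority vote of the training labels falling in each cell.
    D = np.zeros(2 ** m)
    cell_index = ((q_codes > 0) * (2 ** np.arange(m))).sum(axis=1)
    for i in range(2 ** m):
        in_cell = labels[cell_index == i]
        if in_cell.size:
            D[i] = 1.0 if in_cell.mean() > 0.5 else 0.0
    return E, D

def classify(E, D, q_code, m):
    # An E-node fires only when its weighted sum equals m (exact match),
    # so exactly one node is active; threshold the D-sum at 1/2 as in the text.
    active = (E @ q_code == m).astype(float)
    return int(active @ D >= 0.5)
```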
2.4 The Width of the Q-Layer

The overall complexity of the QED net depends upon the number m of nodes in the Q-layer. Hence, our proposal will only be of practical interest if m need not be large. As a first argument concerning the size of this parameter, if m ≤ d then m hyperplanes in general position partition R^d into 2^m regions/cells. These cells are only of interest to us if we know how to assign them to pattern classes. From the perspective of Point (1) in the preceding section, we can only determine a classification of a cell if we have classified data points lying in the cell. Thus, if we wish to make rational use of m nodes in the Q-layer, then we should have in excess of 2^m data points in our training set. If we have fewer data points in T, then we will be generating a multitude of cells about whose categorization we know no more than that provided by possibly known prior class probabilities. Another estimate of the required sample size is obtained by assuming that data points are placed at random in the cells. In this case, results summarized and improved on in Flatto [1982] suggest that we will need in excess of m 2^m points to have a reasonable probability of having all cells occupied by data points.
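The occupancy estimate just quoted is easy to check numerically. A small Monte Carlo sketch (the parameters are illustrative; m = 5 gives M = 32 cells):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_all_cells_hit(n, m, trials=2000):
    """Estimate P(all 2^m cells occupied) when n points land
    uniformly at random among the 2^m cells."""
    M = 2 ** m
    hits = 0
    for _ in range(trials):
        cells = rng.integers(0, M, size=n)
        hits += (np.unique(cells).size == M)
    return hits / trials

m = 5
for n in (2 ** m, m * 2 ** m, 2 * m * 2 ** m):
    print(n, round(prob_all_cells_hit(n, m), 3))
# n around m * 2^m (= 160 for m = 5) is where full occupancy becomes
# likely, in line with the coupon-collector style estimate from Flatto.
```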
Many of the experimental studies reported at the meeting and workshops of NIPS90 assumed training set sizes no larger than about 10,000, implying that we need not consider m in excess of about 10. This number of nodes still yields a tractable QED architecture. A second argument on which to base an a priori determination of m can be made by considering the problem-average performance analyses carried out by Hughes [1968]. He found that the probability of correct classification for a randomly selected classification problem, with equal prior probabilities for selecting a class, varied with the number M of possible feature values, peaking at a moderate value of M relative to the training sample size. This conclusion would suggest that a Q-layer containing as few as five properly selected nodes would suffice (Point (2)) for the design of a good pattern classifier. In any event, both of our arguments suggest that a QED net having no more than about 10 Q-layer nodes might be adequate for many applications. At worst we would have to contemplate about 1,000 nodes in the E-layer, and this is not a prohibitively large number given current directions in hardware development. Nonetheless, the contradiction between our suggestions and current practice suggests that our conclusions are only tentative, and they need to be explored through applications, simulations, and studies of statistical generalization ability.
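Section 2.2's suggestion of using clustering to guide the Q-layer can be made concrete. One heuristic (ours, not prescribed by the paper) is to run k-means on the training features and take each Q-layer unit to be the perpendicular bisector of a pair of cluster centers:

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a (k, d) array of cluster centers."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return centers

def q_layer_from_centers(centers, m, seed=0):
    """m LTUs, each the perpendicular bisector of a random pair of centers.
    Returns weights W (m, d) and thresholds b (m,); unit i outputs
    sign(W[i] @ x - b[i]), so the cells roughly follow the cluster layout."""
    rng = np.random.default_rng(seed)
    W, b = [], []
    for _ in range(m):
        i, j = rng.choice(len(centers), size=2, replace=False)
        W.append(centers[i] - centers[j])
        b.append(((centers[i] ** 2).sum() - (centers[j] ** 2).sum()) / 2.0)
    return np.array(W), np.array(b)
```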
If n < nc then the upper bound will exceed unity and be uninformative. However, if n > nc then the upper bound will converge to zero exponentially fast in sample size. An approximate solution for nc from the VC upper bound yields 16 32e 32e nc ~ 2" VAl' (log - 2 + log log - 2 ). ( ( ( If for purposes of illustration we take ( = .1, VAl' = 50, then we find that nc ~ 902,000. This conclusion, obtained by a direct application of Vapnik-Chervonenkis theory, disagrees by orders of magnitude with the experience of practitioners gained in training such low-complexity networks (about 50 connections). 4 Tightening the VC Argument There are several components of the derivation of VC bounds that involve approximations and these, therefore, can be sources for improving these bounds. These 815 816 Fine approximations include recourses to Chernoff/Hoeffding bounds, union bounds, estimatesofm,N'(n), and the relation between &(1/*)-&(1/0) and 2supf/ 1117(1/)-&(1])1. There is a belief among members of the neural network community that the weakness of the VC argument lies in the fact that by dealing with all possible underlying distributions l' it is dealing with the worst case, and this worst case forces the large sample sizes. We agree with all but the last part of this belief. VC arguments being independent of the choice of l' do indeed have to deal with worst cases. However, the worst case is dealt with through recourse to Chernoff/Hoeffding inequalities, and it is easily shown that these inequalities are not the source of our difficulties. A more promising direction in which to seek realistic estimates of training set size is through reductions in m,N'( n) achieved through constraints on the architecture N. One such restriction is through training time bounds that in effect restrict the portion of N that can be explored. Two other restrictions are discussed below. 5 5.1 Restricting the Architecture Parameter Quantization We can control the growth function contribution by quantizing all network parameters to k bits and thereby restricting N. The VC dimension of a LTU with parameters quantized to k ~ 1 bits equals the VC dimension of the LTU with realvalued parameters. Hence, VC arguments show no improvement. However, there are now only 2 km (d+l) distinct first layers of m nodes accepting vectors from Rd. Hence, there are no more than 2 2m + km (d+l) QED nets, and the restricted N has only finitely many members. Direct application of the union bound and Chernoff inequality yield 1'(&(1/*) - &(1/0) ~ i) ~ 22+2m+km(d+l)e-nf2/2. = When i = .1, m = 5, d 10 this bound becomes less than unity for n > nc = 4710 + 7625k. Thus, even I-bit quantization suggests a training sample size 111 excess of 4700 for reliable generalizat.ion of even this simple network. 5.2 Clustering The growth function m,N'(n) 'overest.imates' the number of cases we need to be concerned with in dealing with the random variable Z(1]) = 1117(1/) - 117 (1/)1 encountered in VC theory derivations. \Ve are only interested in whether Z exceeds a prescribed precision level f, and not whether, say, Z(1/d differs from Z(172) by as little as ~ due to 1]2 disagreeing with 1/1 at only a single sample point. 1 To enforce consideration of networks as being different only if they yield classifications of T disagreeing substantially with each other we might proceed by clustering the points in T into I\, clusters for each of the two classes. 
4 Tightening the VC Argument

There are several components of the derivation of VC bounds that involve approximations, and these, therefore, can be sources for improving the bounds. These approximations include recourse to Chernoff/Hoeffding bounds, union bounds, estimates of m_N(n), and the relation between ε(η*) − ε(η^0) and 2 sup_η |ν_T(η) − ε(η)|. There is a belief among members of the neural network community that the weakness of the VC argument lies in the fact that, by dealing with all possible underlying distributions P, it is dealing with the worst case, and this worst case forces the large sample sizes. We agree with all but the last part of this belief. VC arguments, being independent of the choice of P, do indeed have to deal with worst cases. However, the worst case is dealt with through recourse to Chernoff/Hoeffding inequalities, and it is easily shown that these inequalities are not the source of our difficulties. A more promising direction in which to seek realistic estimates of training set size is through reductions in m_N(n) achieved through constraints on the architecture N. One such restriction is through training time bounds that in effect restrict the portion of N that can be explored. Two other restrictions are discussed below.

5 Restricting the Architecture

5.1 Parameter Quantization

We can control the growth function contribution by quantizing all network parameters to k bits and thereby restricting N. The VC dimension of an LTU with parameters quantized to k ≥ 1 bits equals the VC dimension of the LTU with real-valued parameters. Hence, VC arguments show no improvement. However, there are now only 2^{km(d+1)} distinct first layers of m nodes accepting vectors from R^d. Hence, there are no more than 2^{2^m + km(d+1)} QED nets, and the restricted N has only finitely many members. Direct application of the union bound and Chernoff inequality yields

P(ε(η*) − ε(η^0) ≥ ε) ≤ 2^{2 + 2^m + km(d+1)} e^{−nε²/2}.

When ε = .1, m = 5, d = 10 this bound becomes less than unity for n > n_c = 4710 + 7625k. Thus, even 1-bit quantization suggests a training sample size in excess of 4,700 for reliable generalization of even this simple network.

5.2 Clustering

The growth function m_N(n) 'overestimates' the number of cases we need to be concerned with in dealing with the random variable Z(η) = |ν_T(η) − ε(η)| encountered in VC theory derivations. We are only interested in whether Z exceeds a prescribed precision level ε, and not whether, say, Z(η₁) differs from Z(η₂) by as little as 1/n due to η₂ disagreeing with η₁ at only a single sample point. To enforce consideration of networks as being different only if they yield classifications of T disagreeing substantially with each other, we might proceed by clustering the points in T into κ clusters for each of the two classes. We then train the network so that decision boundaries do not subdivide individual clusters (see also Devroye and Wagner [1979]). The union bound and Chernoff inequality yield

P(ε(η*) − ε(η^0) ≥ ε) ≤ 2^{2+4κ} e^{−nε²/2},

a result that is independent of the input dimension d. If we again choose ε = .1 then the sample size n required to make this upper bound less than unity is about 280 + 560κ. For accuracy at the precision level ε we should expect to have κ ≥ 1/ε. Hence, the least acceptable sample size should exceed 5,880. If we hope to make full use of the capabilities of the net, then we should expect to have clusters in almost all of the 2^m cells. If we take this to mean that 2κ ≥ 2^m, then n > 9,240 for m = 5. If clusters were equally likely to fall into each of the M cells, then we would require M(log M + α) clusters for the probability of no empty cell to be approximately e^{−e^{−α}} (e.g., Flatto [1982]). Roughly, for m = 5 we should then aim for 2κ = 110 and a sample size exceeding 31,000. Large as this estimate is, it is still a factor of 30 below what a direct application of VC theory yields.

Acknowledgements

I wish to thank Thomas W. Parks for insightful remarks on several of the topics discussed above. This paper was prepared with partial support from DARPA through AFOSR-900016A.

References

Baum, E., D. Haussler [1989], What size net gives valid generalization?, in D. Touretzky, ed., Advances in Neural Information Processing Systems 1, Morgan Kaufman Pub., 81-90.
Darken, C., J. Moody [1990], Fast adaptive k-means clustering, NIPS90.
Devroye, L., T. Wagner [1979], Distribution-free bounds with the resubstitution error estimate, IEEE Trans. on Information Theory, IT-25, 208-210.
Flatto, L. [1982], Limit theorems for some random variables associated with urn models, Annals of Probability, 10, 927-934.
Guyon, I., P. Albrecht, Y. Le Cun, J. Denker, W. Hubbard [1990], Design of a neural network character recognizer for a touch terminal, listed as to appear in Pattern Recognition, presented orally by Le Cun at the 1990 Keystone Workshop.
Hughes, G. [1968], On the mean accuracy of statistical pattern recognizers, IEEE Trans. on Information Theory, 14, 55-63.
Pollard, D. [1982], A central limit theorem for k-means clustering, Annals of Probability, 10, 919-926.
Pollard, D. [1984], Convergence of Stochastic Processes, Springer Verlag.
Vapnik, V. [1982], Estimation of Dependences Based on Empirical Data, Springer Verlag.
A unified model of short-range and long-range motion perception

Shuang Wu
Department of Statistics, UCLA
Los Angeles, CA 90095
[email protected]

Hongjing Lu
Department of Psychology, UCLA
Los Angeles, CA 90095
[email protected]

Xuming He
Department of Statistics, UCLA
Los Angeles, CA 90095
[email protected]

Alan Yuille
Department of Statistics, Psychology, and Computer Science, UCLA
Los Angeles, CA 90095
[email protected]

Abstract

The human vision system is able to effortlessly perceive both short-range and long-range motion patterns in complex dynamic scenes. Previous work has assumed that two different mechanisms are involved in processing these two types of motion. In this paper, we propose a hierarchical model as a unified framework for modeling both short-range and long-range motion perception. Our model consists of two key components: a data likelihood that proposes multiple motion hypotheses using nonlinear matching, and a hierarchical prior that imposes slowness and spatial smoothness constraints on the motion field at multiple scales. We tested our model on two types of stimuli, random dot kinematograms and multiple-aperture stimuli, both commonly used in human vision research. We demonstrate that the hierarchical model adequately accounts for human performance in psychophysical experiments.

1 Introduction

We encounter complex dynamic scenes in everyday life. As illustrated by the motion sequence depicted in Figure 1, humans readily perceive the baseball player's body movements and the faster-moving baseball simultaneously. However, from the computational perspective, this is not a trivial problem to solve. The difficulty is due to the large speed difference between the two objects, i.e., the displacement of the player's body is much smaller than the displacement of the baseball between the two frames. Separate motion systems have been proposed to explain human perception in scenarios like this example. In particular, Braddick [1] proposed that there is a short-range motion system which is responsible for perceiving movements with relatively small displacements (e.g., the player's movement), and a long-range motion system which perceives motion with large displacements (e.g., the flying baseball), which is sometimes called apparent motion. Lu and Sperling [2] have further argued for the existence of three motion systems in human vision. The first- and second-order systems conduct motion analysis on luminance and texture information respectively, while the third-order system uses a feature-tracking strategy. In the baseball example, the first-order motion system would be used to perceive the player's movements, but the third-order system would be required for perceiving the faster motion of the baseball. Short-range motion and first-order motion appear to apply to the same class of phenomena, and can be modeled using computational theories that are based on motion energy or related techniques. However, long-range motion and third-order motion employ qualitatively different computational strategies involving tracking features over time, which may require attention-driven processes.

Figure 1: Left panel: Short-range and long-range motion: two frames from a baseball sequence where the ball moves with much faster speed than the other objects. Right panel: A graphical illustration of our hierarchical model in one dimension. Each node represents motion at a different location and scale. A child node can have multiple parents, and the prior constraints on motion are expressed by parent-child interactions.
In contrast to these previous multi-system theories [2, 3], we develop a unified single-system framework to account for these phenomena of human motion perception. We model motion estimation as an inference problem which uses flexible prior assumptions about motion flows and statistical models for quantifying the uncertainty in motion measurement. Our model differs from the traditional approaches in two aspects. First, the prior model is defined over a hierarchical graph, see Figure 1, where the nodes of the graph represent the motion at different scales. This hierarchical structure is motivated by the human visual system, which is organized hierarchically [8, 9, 4]. Such a representation makes it possible to define motion priors and contextual effects at a range of different scales, and so differs from other models of motion perception based on motion priors [5, 6]. This model connects lower-level nodes to multiple coarser-level nodes, resulting in a loopy graph structure, which imposes a more flexible prior than tree-structured models (e.g. [7]). We define a probability distribution on this graph using potentials defined over the graph cliques to capture spatial smoothness constraints [10] at different scales and slowness constraints [5, 11, 12, 13]. Second, our data likelihood terms allow a large space of possible motions, which include both short-range and long-range motion. Locally, the motion is often highly ambiguous (e.g., the likelihood term allows many possible motions), which is resolved in our model by imposing the hierarchical motion prior. Note that we do not coarsen the image and do not rely on coarse-to-fine processing [14]. Instead we use a bottom-up compositional/hierarchical approach where local hypotheses about the motion are combined to form hypotheses for larger regions of the image. This enables us to deal simultaneously with both long-range and short-range motion.

We tested our model using two types of stimuli commonly used in human vision research. The first stimulus type are random dot kinematograms (RDKs), where some of the dots (the signal) move coherently with large displacements, whereas other dots (the noise) move randomly. RDKs are one of the most important stimuli used in both physiological and psychophysical studies of motion perception. For example, electrophysiological studies have used RDKs to analyze the neuronal basis of motion perception, identifying a functional link between the activity of motion-selective neurons and behavioral judgments of motion perception [15]. Psychophysical studies have used RDKs to measure the sensitivity of the human visual system for perceiving coherent motion, and also to infer how motion information is integrated to perceive global motion under different viewing conditions [16]. We used two-frame RDKs as an example of a long-range motion stimulus. The second stimulus type are moving gratings or plaids. These stimuli have been used to study many perceptual phenomena. For example, when randomly orientated lines or grating elements drift behind apertures, the perceived direction of motion is heavily biased by the orientation of the lines/gratings, as well as by the shape and contrast of the apertures [17, 18, 19]. Multiple-aperture stimuli have also recently been used to study coherent motion perception with short-range motion stimuli [20, 21]. For both types of stimuli we compared the model predictions with human performance across various experimental conditions.
2 Hierarchical Model for Motion Estimation

Our hierarchical model represents a motion field using a graph G = (V, E), which has L + 1 hierarchical levels, i.e., V = Ω^0 ∪ ... ∪ Ω^l ∪ ... ∪ Ω^L. The level l has a set of nodes Ω^l = {ω^l(i, j), i = 1, ..., M_l, j = 1, ..., N_l}, forming a 2D lattice indexed by (i, j). More specifically, we start from the pixel lattice and construct the hierarchy as follows. The nodes {ω^0(i, j)} at the 0th level correspond to the pixel positions {x | x = (i, j)} of the image lattice. We recursively add higher levels with nodes Ω^l (l = 1, ..., L). The level-l lattice decreases by a factor of 2 along each coordinate direction from level l − 1. The edges E of the graph connect nodes at each level of the hierarchy to nodes in the neighboring levels. Specifically, edges connect node ω^l(i, j) at level l to a set of child nodes Ch_l(i, j) = {ω^{l−1}(i′, j′)} at level l − 1 satisfying 2i − d ≤ i′ ≤ 2i + d, 2j − d ≤ j′ ≤ 2j + d. Here d is a parameter controlling how many neighboring nodes in a level share child nodes. Figure 1 illustrates the graph structure of this hierarchical model in the 1-D case with d = 2. Note that our graph G contains closed loops due to the sharing of child nodes. To apply the model to motion estimation, we define a state variable u^l(i, j) at each node to represent the motion, and connect the 0th-level nodes to two consecutive image frames, D = (I_t(x), I_{t+1}(x)). The problem of motion estimation is to estimate the 2D motion field u(x) at time t for every pixel site x from the input D. For simplicity, we use u^l_i to denote the motion instead of u^l(i, j) in the following sections.

2.1 Model formulation

We define a probability distribution over the motion field U = {u^l}_{l=0}^L, where u^l = {u^l_i}, on the graph G conditioned on the input image pair D:

P(U | D) = (1/Z) exp( −[ E_d(D, u^0) + Σ_{l=0}^{L−1} E_u^l(u^l, u^{l+1}) ] )    (1)

where E_d is the data term for the motion based on local image cues, and the E_u^l are hierarchical priors on the motion which impose slowness and smoothness constraints at different levels. The energy terms E_d, {E_u^l} are defined using L1 norms to encourage robustness [22]. This robust norm helps deal with the measurement noise that often occurs at motion boundaries and prevents over-smoothing at the higher levels. The details of the two energy terms are as follows.

1) The Data Term E_d. The data energy term is defined only at the bottom level of the hierarchy. It is specified in terms of the L1 norm between local image intensity values from adjacent frames. More precisely:

E_d(D, u^0) = Σ_i ( ||I_t(x_i) − I_{t+1}(x_i + u^0_i)||_{L1} + λ ||u^0_i||_{L1} )    (2)

where the first term defines a difference measure between two measurements centered at x_i in I_t and at x_i + u^0_i in I_{t+1}, respectively. We choose to use pixel values only here. The second term imposes a slowness prior on the motion, weighted by the coefficient λ. Note that the first term is a matching term that computes the similarity between I_t(x) and I_{t+1}(x + u) for any displacement u. These similarity scores at x give confidence values for different local motion hypotheses: higher similarity means the motion is more likely, while lower similarity means it is less likely.
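A minimal sketch of the data term in equation (2), computing L1 matching costs over a discrete set of candidate displacements (the candidate grid, the λ value, and the wraparound border handling are our illustrative choices, not the paper's):

```python
import numpy as np

def data_costs(I_t, I_t1, candidates, lam=0.1):
    """L1 matching cost of eq. (2) for every pixel and candidate motion.

    I_t, I_t1: (H, W) grayscale frames.
    candidates: list of integer displacements (dy, dx).
    Returns an (H, W, K) array; lower cost = more likely local motion."""
    H, W = I_t.shape
    costs = np.empty((H, W, len(candidates)))
    for k, (dy, dx) in enumerate(candidates):
        # shifted[y, x] = I_{t+1}(x + u); borders wrap for simplicity.
        shifted = np.roll(np.roll(I_t1, -dy, axis=0), -dx, axis=1)
        match = np.abs(I_t - shifted)          # ||I_t(x) - I_{t+1}(x+u)||_1
        slowness = lam * (abs(dy) + abs(dx))   # lam * ||u||_1
        costs[:, :, k] = match + slowness
    return costs
```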
2) The Hierarchical Prior {E_u^l}. We define a hierarchical prior on the slowness and spatial smoothness of motion fields. The first term of this prior is expressed by energy terms between nodes at different levels of the hierarchy and enforces a smoothness preference for their states u: that the motion of a child node be similar to the motion of its parent. We use the robust L1 norm in the energy terms so that violations of this consistency constraint are penalized only moderately. This imposes weak smoothness on the motion field and allows abrupt changes at motion boundaries. The second term is an L1 norm of the motion velocities that encourages slowness. To be specific, the energy function E_u^l(u^l, u^{l+1}) is defined to be:

E_u^l(u^l, u^{l+1}) = α(l) Σ_{i∈Ω^{l+1}} ( Σ_{j∈Ch_{l+1}(i)} ||u^{l+1}_i − u^l_j||_{L1} + β ||u^{l+1}_i||_{L1} )    (3)

where α(l) is the weight parameter for the energy terms at the l-th level and β controls the relative weight of the slowness prior. Note that our hierarchical smoothness prior differs from conventional smoothness constraints, e.g. [10], because those impose smoothness 'sideways' between neighboring pixels at the same resolution level, which requires that the motion be similar between neighboring sites at the pixel level only. Imposing longer-range interactions sideways becomes problematic, as it leads to Markov Random Field (MRF) models with a large number of edges. This structure makes it difficult to do inference using standard techniques like belief propagation and max-flow/min-cut. By contrast, we impose smoothness by requiring that child nodes have similar motions to their parent nodes. This 'hierarchical' formulation enables us to impose smoothness interactions at different hierarchy levels while inference can be done efficiently by exploiting the hierarchy.

2.2 Motion Estimation

We estimate the motion field by computing the most probable motion Û = arg max_U P(U | D), where P(U | D) was defined as a Gibbs distribution in equation (1). Performing inference on this model is challenging since the energy is defined over a hierarchical graph structure with many closed loops, the state variables U are continuous-valued, and the energy function is non-convex. Our strategy is to convert this into a discrete optimization problem by quantizing the motion state space. For example, we estimate the motion at an integer-valued resolution if that accuracy is sufficient for certain experimental settings. Given a discrete state space, our algorithm involves bottom-up and top-down processing and is sketched in Figure 2. The algorithm is designed to be parallelizable and to only require computations between neighboring nodes. This is desirable for biological plausibility, but it also has the practical advantage that we can implement the algorithm using GPU-type architectures, which enables fast convergence.

Figure 2: An illustration of our inference procedure. Left top panel: the original hierarchical graph with loops. Left bottom panel: the bottom-up process proceeds on a tree graph with multiple copies of nodes (connected by solid lines), which relaxes the problem. The top-down process enforces the consistency constraints between copies of each node (denoted by the dashed connections). Right panel: an example of the inference procedure on two street-scene frames. We show the estimates from minimizing Ê(U) (bottom-up) and E(U) (top-down). The motions are color-coded and also displayed by arrows.

We describe our inference algorithm in detail as follows.

i) Bottom-up Pass. We first approximate the hierarchical graph with a tree-structured model by making multiple copies of child nodes, such that each child node has a single parent (see [23]). This enables us to perform exact inference on the relaxed model using dynamic programming. More specifically, we compute an approximate energy function Ê(U) recursively by exploiting the tree structure:

Ê(u^{l+1}_i) = Σ_{j∈Ch_{l+1}(i)} min_{u^l_j} [ E_u^l(u^{l+1}_i, u^l_j) + Ê(u^l_j) ]

where Ê(u^0_j) at the bottom level is the data energy E_d(u^0_j; D). At the top level L we compute the states û^L_i which minimize Ê(u^L_i).

ii) Top-down Pass. Given the top-level motion û^L_i, we then compute the optimal motion configuration for the other levels using the following top-down procedure. The top-down pass enforces the consistency constraints, relaxed earlier, on the recursively computed energy function Ê, so that all copies of each node have the same optimal state. We minimize the following energy function recursively for each node:

û^l_j = arg min_{u^l_j} [ Σ_{i∈Pa_l(j)} E_u^l(û^{l+1}_i; u^l_j) + Ê(u^l_j) ]

where Pa_l(j) is the set of parents of the level-l node j. In the top-down pass, spatial smoothness is imposed on the motion estimates at higher levels, which provide context information to disambiguate the motion estimated at lower levels. The intuition for this two-pass inference algorithm is that the motion estimates of the lower-level nodes are typically more ambiguous than those of the higher-level nodes, because the higher levels are able to integrate information from a larger number of nodes at lower levels (although some information is lost due to the coarse representation of the motion field). Hence the estimates from the higher-level nodes are usually less noisy and can be used to give 'context' to resolve the ambiguities of the lower-level nodes. From another perspective, this can be thought of as a message-passing type algorithm which uses a specific scheduling scheme [24].
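The two passes can be illustrated on a toy 1-D, two-level instance with a quantized state space. The sizes, weights, and tree layout below (children are not shared, so the top-down step reduces to plain backtracking) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

states = np.arange(-3, 4)                    # quantized candidate motions
K = len(states)
n_child, n_parent = 8, 4
alpha, beta = 1.0, 0.1
rng = np.random.default_rng(1)
E_data = rng.random((n_child, K))            # stands in for eq. (2) costs
children_of = {i: [2 * i, 2 * i + 1] for i in range(n_parent)}

def pair_energy(u_parent, u_child):
    # One parent-child term of eq. (3), with the slowness part
    # attached per child for simplicity.
    return alpha * abs(u_parent - u_child) + beta * abs(u_parent)

# Bottom-up pass: E_hat(parent state) = sum over children of the best
# child state, i.e. the dynamic-programming recursion in the text.
E_hat = np.zeros((n_parent, K))
best_child = np.zeros((n_parent, K, 2), dtype=int)
for i, kids in children_of.items():
    for a, u_p in enumerate(states):
        for c, j in enumerate(kids):
            costs = [pair_energy(u_p, u_c) + E_data[j, b]
                     for b, u_c in enumerate(states)]
            b_star = int(np.argmin(costs))
            E_hat[i, a] += costs[b_star]
            best_child[i, a, c] = b_star

# Top-down pass: fix each parent at its minimizer, then read off the
# consistent child states.
top = E_hat.argmin(axis=1)
child = np.zeros(n_child, dtype=int)
for i, kids in children_of.items():
    for c, j in enumerate(kids):
        child[j] = best_child[i, top[i], c]
print("parent motions:", states[top], "child motions:", states[child])
```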
3 Experiments with random dot kinematograms

3.1 The stimuli and simulation procedures

Random dot kinematogram (RDK) stimuli consist of two image frames with N dots in each frame [1, 16, 6]. As shown in figure (3), the dots in the first frame are located at random positions. A proportion CN of the dots (the signal dots) are moved coherently to the second frame with a translational motion. The remaining (1 − C)N dots (the noise dots) are moved to random positions in the second frame. The displacements of the signal dots are large between the two frames. As a result, two-frame RDK stimuli are typically considered an example of long-range motion. The difficulty of perceiving coherent motion in RDK stimuli is due to the large correspondence uncertainty introduced by the noise dots, as shown in the rightmost panel of figure (3).

Figure 3: The left three panels show coherent stimuli with N = 20, C = 0.1; N = 20, C = 0.5; and N = 20, C = 1.0, respectively. The closed and open circles denote dots in the first and second frame, respectively. The arrows show the motion of the dots which move coherently. Correspondence noise is illustrated by the rightmost panel, showing that a dot in the first frame has many candidate matches in the second frame.
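A minimal generator for such two-frame RDK stimuli (the frame size, displacement, and representation of dots as continuous positions are our illustrative choices):

```python
import numpy as np

def make_rdk(n_dots, coherence, displacement=(5.0, 0.0), size=100.0, seed=None):
    """Two-frame random dot kinematogram.

    A fraction `coherence` of the dots translates by `displacement`;
    the rest are redrawn at random positions in frame 2.
    Returns (frame1, frame2) as (N, 2) arrays of dot positions."""
    rng = np.random.default_rng(seed)
    frame1 = rng.uniform(0, size, size=(n_dots, 2))
    frame2 = frame1 + np.asarray(displacement)
    n_noise = int(round((1 - coherence) * n_dots))
    noise_idx = rng.choice(n_dots, size=n_noise, replace=False)
    frame2[noise_idx] = rng.uniform(0, size, size=(n_noise, 2))
    return frame1, frame2

f1, f2 = make_rdk(n_dots=100, coherence=0.5, seed=0)
```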
Barlow and Tripathy [16] used RDK stimuli to investigate how dot density affects human performance in a global motion discrimination task. They found that human performance (measured by the coherence threshold) varies little with dot density. We tested our model on the same task of judging the global motion direction (to the left or to the right), using the RDK motion stimulus as the input image. We applied our model to estimate motion fields and used the average velocity to indicate the global motion direction. We ran 500 trials for each coherence-ratio condition. The dot number varied with N = 40, 80, 100, 200, 400, 800, respectively, corresponding to a wide range of dot densities. The model performance was computed for each coherence ratio to fit psychometric functions and to find the coherence threshold at which model performance reaches 75% accuracy.

Figure 4: Estimated motion fields for random dot kinematograms. First row: 50 dots in the RDK stimulus; second row: 100 dots in the RDK stimulus. Column-wise, the coherence ratio is C = 0.0, 0.3, 0.6, 0.9, respectively. The arrows indicate the motion estimated for each dot.

3.2 The Results

Figure (4) shows examples of the estimated motion field for various values of the dot number N and coherence ratio C. The model outputs provide visually coherent motion estimates when the coherence ratio is greater than 0.3, which is consistent with human perception. With increasing coherence ratio, the estimated motion flow appears more coherent. To further compare with human performance [16], we examined whether model performance is affected by dot density in the RDK display. The right plot in figure (5) shows the model performance as a function of the coherence ratio. The coherence threshold, using the criterion of 75% accuracy, showed that model performance varied little with increasing dot density, which is consistent with human performance reported in psychophysical experiments [16, 6].

4 Experiments with multi-aperture stimuli

4.1 The two types of stimulus

The multiple-aperture stimulus consisted of a dense set of spatially isolated elements. Two types of elements were used in our simulations: (i) drifting sine-wave gratings with random orientation, and (ii) plaids which combine two gratings with orthogonal orientations. Each element was displayed through a stationary Gaussian window. Figure (6) shows examples of these two types of stimuli. The grating elements are of the form

P_i(x, t) = G(x − x_i, σ) F(x − x_i − v_i t)

where x_i denotes the center of the element and F(.) represents a grating, F(x, y) = sin(f x sin(θ_i) + f y cos(θ_i)), where f is the fixed spatial frequency and θ_i is the orientation of the grating. The grating stimulus is I(x, t) = Σ_{i=1}^N P_i(x, t), where N is the number of elements (which is kept constant). For the CN signal gratings, the motion v_i was set to a fixed value v. For the (1 − C)N noise gratings, we set |v_i| = |v| and the direction of v_i was sampled from a uniform distribution. The grating orientation angles θ_i were sampled from a uniform distribution also.
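The grating stimulus can be rendered directly from its definition. A sketch (grid resolution, spatial frequency f, envelope width σ, and speed are illustrative values; the sampling of θ_i and v_i follows the description above):

```python
import numpy as np

def grating_stimulus(t, n_elem=50, coherence=0.5, size=256, f=0.3,
                     sigma=6.0, speed=1.0, v_signal=(1.0, 0.0), seed=0):
    """Render I(x, t) = sum_i G(x - x_i, sigma) * F(x - x_i - v_i t)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    centers = rng.uniform(0, size, size=(n_elem, 2))
    thetas = rng.uniform(0, np.pi, n_elem)            # random orientations
    dirs = rng.uniform(0, 2 * np.pi, n_elem)          # noise motion directions
    img = np.zeros((size, size))
    for i in range(n_elem):
        if i < int(coherence * n_elem):
            v = np.asarray(v_signal)                  # signal element
        else:
            v = speed * np.array([np.cos(dirs[i]), np.sin(dirs[i])])
        dx = xx - centers[i, 0] - v[0] * t            # carrier drifts with v_i
        dy = yy - centers[i, 1] - v[1] * t
        carrier = np.sin(f * (dx * np.sin(thetas[i]) + dy * np.cos(thetas[i])))
        gx, gy = xx - centers[i, 0], yy - centers[i, 1]
        envelope = np.exp(-(gx ** 2 + gy ** 2) / (2 * sigma ** 2))  # stationary G
        img += envelope * carrier
    return img
```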
Figure 5: Left panel: Figure 2 in [16], showing that the coherence ratio threshold varies very little with dot density. Right panel: Simulations of our model show a similar trend; N = 40, 80, 100, 200, 400 and 800.

Figure 6: Multi-aperture gratings and plaids. Left column: sample stimuli. Right column: stimuli with the local drifting velocity of each element indicated by arrows. The stimulus details are shown in the magnified windows at the upper right corner of each image.

The plaid elements combine two gratings with orthogonal orientations (each grating has the same speed but can have a different motion direction). This leads to the plaid element

Q_i(x, t) = G(x − x_i, σ) { F_1(x − x_i − v_{i,1} t) + F_2(x − x_i − v_{i,2} t) }

where F_1(x, y) = sin(f x sin θ_i + f y cos θ_i) and F_2(x, y) = sin(−f x cos θ_i + f y sin θ_i). The plaid stimulus is I(x, t) = Σ_{i=1}^N Q_i(x, t). For the CN signal plaids, the motions v_{i,1}, v_{i,2} were set to a fixed v. For the (1 − C)N noise plaids, the directions of v_{i,1}, v_{i,2} were randomly assigned, but their magnitude |v| was fixed.

Figure 7: Left two panels: Estimated motion fields of grating and plaid stimuli. Rightmost panel: Psychometric functions for grating and plaid stimuli.

4.2 Simulation procedures and results

The left two panels in Figure (7) show the estimated motion fields for the two types of stimulus we studied, with the same coherence ratio of 0.7. Plaid stimuli produce more coherent estimated motion fields than grating stimuli, which is understandable because they have less ambiguous local motion cues. We tested our model in an 8-direction discrimination task for estimating the global motion direction [20]. The model used raw image frames as the input. We ran 300 trials for each stimulus type, and used the direction of the average motion to predict the global motion direction. The prediction accuracy, i.e. the number of times our model predicted the correct motion direction from the 8 alternatives, was calculated at different coherence-ratio levels. The difference between gratings and plaids is shown in the rightmost panel of Figure (7), where the psychometric function of the plaid stimuli is always above that of the grating stimuli, indicating better performance. These simulation results of our model are consistent with the psychophysics experiments in [20].

5 Discussion

In this paper, we proposed a unified single-system framework that is capable of dealing with both short-range and long-range motion. It differs from traditional motion-energy models because it does not use spatiotemporal filtering. Note that it was shown in [6] that motion-energy models are not well suited to the long-range motion stimuli studied in this paper. The local ambiguities of motion are resolved by a novel hierarchical prior which combines slowness and smoothness at a range of different scales. Our model accounts well for human perception of both short-range and long-range motion using the two standard stimulus types (RDKs and gratings). The hierarchical structure of our model is partly motivated by known properties of cortical organization. It also has the computational motivation of being able to represent prior knowledge about motion at different scales and to allow efficient computation.

Acknowledgments

This research was supported by NSF grants IIS-0917141, 613563 to AY and BCS-0843880 to HL. We thank Alan Lee and George Papandreou for helpful discussions.

References

[1] O. Braddick. A short-range process in apparent motion. Vision Research, 14, 519-529. 1974.
[2] Z. Lu and G. Sperling. Three-systems theory of human visual motion perception: review and update. Journal of the Optical Society of America A, 18, 2331-2369. 2001.
[3] L. M. Vaina and S. Soloviev. First-order and second-order motion: neurological evidence for neuroanatomically distinct systems. Progress in Brain Research, 144, 197-212. 2004.
[4] T. S. Lee and D. B. Mumford. Hierarchical Bayesian inference in the visual cortex. JOSA A, Vol. 20, Issue 7, pp. 1434-1448. 2003.
[5] A. L. Yuille and N. M. Grzywacz. A computational theory for the perception of coherent visual motion. Nature, 333, pp. 71-74. 1988.
[6] H. Lu and A. L. Yuille. Ideal observers for detecting motion: Correspondence noise. NIPS, 2006.
[7] M. R. Luettgen, W. C. Karl and A. S. Willsky. Efficient Multiscale Regularization with Applications to the Computation of Optical Flow. IEEE Transactions on Image Processing, Vol. 3, pp. 41-64. 1993.
[8] P. Cavanagh. Short-range vs long-range motion: not a valid distinction. 5(4), pp. 303-309. 1991.
[9] S. Grossberg and M. E. Rudd. Cortical dynamics of visual motion perception: short-range and long-range apparent motion. Psychological Review, 99(1), pp. 78-121. 1992.
[10] B. K. P. Horn and B. G. Schunck. Determining Optical Flow. Artificial Intelligence, 17(1-3), pp. 185-203. 1981.
[11] Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, Jun 2002.
[12] A. A. Stocker and E. P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, vol. 9(4), pp. 578-585, Apr 2006.
[13] S. Roth and M. J. Black. On the spatial statistics of optical flow. International Journal of Computer Vision, 74(1):33-50, August 2007.
[14] P. Anandan. A computational framework and an algorithm for the measurement of visual motion. International Journal of Computer Vision, 2, pp. 283-310. 1989.
[15] K. H. Britten, M. N. Shadlen, W. T. Newsome and J. A. Movshon. The analysis of visual motion: a comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12(12), 4745-4765. 1992.
[16] H. Barlow and S. P. Tripathy. Correspondence noise and signal pooling in the detection of coherent visual motion. Journal of Neuroscience, 17(20), 7954-7966. 1997.
[17] E. Mingolla, J. T. Todd, and J. F. Norman. The perception of globally coherent motion. Vision Research, 32(6), 1015-1031. 1992.
[18] J. Lorenceau and M. Shiffrar. The influence of terminators on motion integration across space. Vision Research, 32(2), 263-273. 1992.
[19] T. Takeuchi. Effect of contrast on the perception of moving multiple Gabor patterns. Vision Research, 38(20), 3069-3082. 1998.
[20] K. Amano, M. Edwards, D. R. Badcock and S. Nishida. Adaptive pooling of visual motion signals by the human visual system revealed with a novel multi-element stimulus. Journal of Vision, 9(3(4)), 1-25. 2009.
[21] A. Lee and H. Lu. A comparison of global motion perception using a multiple-aperture stimulus. Journal of Vision, 10(4), 9. 2010.
[22] M. Black and P. Anandan. The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU, 63(1), 1996.
[23] A. Choi, M. Chavira and A. Darwiche. A Scheme for Generating Upper Bounds in Bayesian Networks. UAI, 2007.
[24] J. Pearl. Probabilistic Reasoning in Intelligent Systems: networks of plausible inference, 1988.
Link Discovery using Graph Feature Tracking

Emile Richard, ENS Cachan - CMLA & MilleMercis, France, [email protected]
Nicolas Baskiotis, ENS Cachan - CMLA, [email protected]
Theodoros Evgeniou, Technology Management and Decision Sciences, INSEAD, Bd de Constance, Fontainebleau 77300, France, [email protected]
Nicolas Vayatis, ENS Cachan & UniverSud - CMLA, UMR CNRS 8536, France, [email protected]

Abstract

We consider the problem of discovering links of an evolving undirected graph given a series of past snapshots of that graph. The graph is observed through the time sequence of its adjacency matrix and only the presence of edges is observed. The absence of an edge on a certain snapshot cannot be distinguished from a missing entry in the adjacency matrix. Additional information can be provided by examining the dynamics of the graph through a set of topological features, such as the degrees of the vertices. We develop a novel methodology by building on both static matrix completion methods and the estimation of the future state of relevant graph features. Our procedure relies on the formulation of an optimization problem which can be approximately solved by a fast alternating linearized algorithm whose properties are examined. We show experiments with both simulated and real data which reveal the interest of our methodology.

1 Introduction

The prediction of the future state of an evolving graph is a challenge of interest in many applications such as predicting hyperlinks of webpages [16], finding protein-protein interactions [7], studying social networks [9], as well as collaborative filtering and recommendations [6]. Link prediction can also be seen as a special case of matrix completion where the goal is to estimate the missing entries of the adjacency matrix of the graph, where the entries can only be "0s" and "1s". Matrix completion became popular after the Netflix Challenge and has been extensively studied on both theoretical and algorithmic aspects [15]. In this paper we consider a special case of predicting the evolution of a graph, where we only predict the new edges given a fixed set of vertices of an undirected graph, by using the dynamics of the graph over time.

Most of the existing methods in matrix completion assume that weights over the entries (i.e. the edges of the graph, e.g. scores in movie recommendation applications) are observed [3]. These weights provide richer information than the binary case (existence or absence of a link). Consider for instance the issue of link prediction in recommender systems. In that case, we consider a bipartite graph for which the vertices represent products and users, and the edges connect users with the products they have purchased in the past. The setup we consider in the present paper corresponds to the binary case where we only observe purchase data, say the presence of a link in the graph, without any score or feedback on the product for a given user. Hence, we will deal here with the situation where the components of snapshots of the adjacency matrix only consist of "1s" and missing values. Moreover, link prediction methods typically use only one snapshot of the graph's adjacency matrix - the most recent one - to predict its missing entries [9], or rely on latent variables providing semantic information for each vertex [11]. Since these methods do not use any information over time, they can be called static methods.
Static methods are based on the heuristic that some topological features, such as the degree, the clustering coefficient, or the length of the paths, follow specific distributions. However, information about how the links of the graph and its topological features have been evolving over time may also be useful to predict future links. In the example of recommender systems, knowing that a particular product has been purchased by increasingly more people in a short time window provides useful information about the type of the recommendations to be made in the next period. The main idea underlying our work lies in the observation that a few graph features can capture the dynamics of the graph evolution and provide information for predicting future links.

The purpose of the paper is to present a procedure which exploits the dynamics of the evolution of the graph to find unrevealed links in the graph. The main idea is to learn over time the evolution of well-chosen local features (at the level of the vertices) of the graph and then use the predicted value of these features on the next time period to discover the missing links. Our approach is related to two theoretical streams of research: matrix completion and diffusion models. In the latter, only the dynamics over time of the degree of a particular vertex of the graph are modeled - the diffusion of the product corresponding to that vertex, for example [17, 14]. Beyond the large number of static matrix completion methods, only a few methods have been developed that combine static and dynamic information, mainly using parametric methods - see [4] for a survey. For example, [13] embeds graph vertices in a latent space and uses either a Markov model or a Gaussian one to track the position of the vertices in this space; [10] uses a probabilistic model of the time interval between the appearance of two edges or subgraphs to predict future edges or subgraphs. However, to the best of our knowledge, there has not been any regularization-based method for this problem, which we consider in this paper.

The setup of dynamic feature-based matrix completion is presented in Section 2. In Section 3, we develop a fast linearized algorithm for efficient link prediction. We then discuss the use and estimation of relevant features within this regularization approach in Section 4. Eventually, numerical experiments on synthetic and real data sets are depicted in Section 5.

2 Dynamic feature-based matrix completion

Setup. We consider a sequence of T undirected graphs with n vertices and n × n binary adjacency matrices A_t, t ∈ {1, 2, ..., T}, where for each t the edges of the graph are also contained in the graph at time t + 1. Given A_t, t ∈ {1, 2, ..., T}, the goal is to predict the edges of the graph that are most likely to appear at time T + 1, that is, the most likely non-zero elements of the binary adjacency matrix A_{T+1}. To this purpose we want to learn an n × n real-valued matrix S whose elements indicate how likely it is that there is a non-zero value at the corresponding position of matrix A_{T+1}. The edges that we predict to be the most likely ones at time T + 1 are the ones corresponding to the largest values in S.

We assume that certain features of the matrices A_t evolve smoothly over time. Such an assumption is necessary to allow learnability of the evolution of A_t over time. For simplicity we consider a linear feature map f : A_t ↦ F_t, where F_t is an n × k matrix of the form F_t = A_t Ω, with Ω an n × k matrix of features.
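To fix ideas, here is a minimal numpy sketch of the linear feature map f : A_t ↦ A_t Ω (our own illustration, not code from the paper; function names are ours). With Ω a single column of ones, F_t reduces to the vector of vertex degrees, one of the topological features mentioned in the abstract.

```python
import numpy as np

def graph_features(A, Omega):
    # Linear feature map F = A @ Omega: maps an n x n adjacency matrix
    # to an n x k matrix with one row of k features per vertex.
    return A @ Omega

# Toy 3-vertex graph; with Omega = column of ones, the features are degrees.
A_t = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
Omega = np.ones((3, 1))
print(graph_features(A_t, Omega).ravel())  # -> [2. 1. 1.]
```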
Various feature maps, possibly nonlinear, can be used. We discuss an example of such features Ω and a way to predict F_{T+1} given past values of the feature map F_1, F_2, ..., F_T in Section 4, but other features or prediction methods can be used in combination with the main part of the proposed approach. In the proposed method discussed in Section 3 we assume for now that we already have an estimate of F_{T+1}.

An optimization problem. The procedure we propose for link prediction is based on the assumption that the dynamics of graph features also drive the discovery of the location of new links. Given the last adjacency matrix A_T, a set of features Ω, and an estimate F̂ of F_{T+1} based on the sequence of adjacency matrices A_t, t ∈ {1, 2, ..., T}, we want to find a matrix S which fulfills the following requirements:
- S has low rank - this is a standard assumption in matrix completion problems [15].
- S is close to the last adjacency matrix A_T - the distance between these two matrices will provide a proxy for the training error.
- The values of the feature map at S and A_{T+1} are similar.

For any matrix M, we denote by ‖M‖_F = √(Tr(M'M)) the Frobenius norm of M, with M' being the transpose of M; the trace operator Tr(N) computes the sum of the diagonal elements of the square matrix N. We also define ‖M‖_* = Σ_{k=1}^{n} σ_k(M), the nuclear norm of a square matrix M of size n × n, where σ_k(M) denotes the k-th largest singular value of M. We recall that a singular value of a matrix M corresponds to the square root of an eigenvalue of M'M, ordered decreasingly. The proposed optimization problem for feature-based matrix completion is then:

min_S L(S, τ, ν), with L(S, τ, ν) := τ ‖S‖_* + (1/2) ‖S - A_T‖_F² + (ν/2) ‖SΩ - F̂‖_F²,   (1)

where τ and ν are positive regularization parameters. Each term of the functional L reflects one of the aforementioned requirements for the desired matrix S. In the case where ν = 0, we do not use information about the dynamics of the graph; the minimizer of L then corresponds to the singular value thresholding approach developed in [2], which is therefore a special case of (1). Note that a key difference between link prediction and matrix completion is that in (1) the training error uses all entries of the adjacency matrix, while in the case of matrix completion only the known entries (in our case the "1s") are used. We now discuss an efficient optimization algorithm for (1), the main part of this work.
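For reference, the objective (1) is cheap to evaluate numerically. Below is a minimal numpy sketch of this computation (ours, under the notation above, with tau and nu standing for the two regularization parameters):

```python
import numpy as np

def objective(S, A_T, Omega, F_hat, tau, nu):
    # L(S, tau, nu) of Eq. (1): nuclear norm penalty plus the two
    # quadratic terms (fit to A_T and fit to the predicted features).
    nuclear = np.linalg.svd(S, compute_uv=False).sum()  # sum of singular values
    fit_adjacency = 0.5 * np.linalg.norm(S - A_T, "fro") ** 2
    fit_features = 0.5 * nu * np.linalg.norm(S @ Omega - F_hat, "fro") ** 2
    return tau * nuclear + fit_adjacency + fit_features
```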
3 An algorithm for link discovery

Solving (1) directly is computationally slow. We adapt the fast linearization method developed in [5] to our problem, which attains an optimal iteration complexity when using only first-order information. Here, the functional L(S, τ, ν) is continuous and convex but not differentiable with respect to S. We propose to convert the minimization of the target functional L(S, τ, ν) into a tractable problem through the following steps:

1. Variable splitting - Set:

g(S, τ) = τ ‖S‖_*  and  h(S, ν) = (1/2) ‖S - A_T‖_F² + (ν/2) ‖SΩ - F̂‖_F².

Denote by S, S̃ two n × n matrices. Then, the optimization problem (1) is equivalent to:

min_{S, S̃} L̃(S, S̃) subject to S - S̃ = 0,   (2)

where L̃(S, S̃) := g(S, τ) + h(S̃, ν).

2. Smoothing the nuclear norm - We recall the variational formulation of the nuclear norm, ‖S‖_* = max_Z {⟨S, Z⟩ : σ_1(Z) ≤ 1}. Using the technique from [12], we can use a smooth approximation of the nuclear norm and replace g in the functional by a surrogate function g_μ, with μ > 0 being a smoothing parameter:

g_μ(S, τ) = τ · max_Z { ⟨S, Z⟩ - (μ/2) ‖Z‖_F² : σ_1(Z) ≤ 1 }.

3. Alternating minimization - We propose to minimize the functional

L_μ(S, S̃) := g_μ(S, τ) + h(S̃, ν),   (3)

which is continuous, differentiable and convex, under the constraint that S = S̃. To do this, one has to minimize simultaneously the two functions g_μ and h. In order to derive the iterative algorithm based on linearized alternating minimization, we introduce two strictly convex approximations of these functions, which involve an additional parameter ρ > 0:

G_{μ,ρ}(S, S̃) = g_μ(S, τ) + ⟨∇h(S̃), S - S̃⟩ + (1/(2ρ)) ‖S - S̃‖_F²
H_ρ(S, S̃) = h(S̃, ν) + ⟨∇g_μ(S), S̃ - S⟩ + (1/(2ρ)) ‖S̃ - S‖_F²

where ⟨B, C⟩ = Tr(B'C) for two matrices B, C. The tuning of the parameter ρ will be discussed with the convergence results at the end of this section. We denote by m_G(S̃) the minimizer of G_{μ,ρ}(S, S̃) with respect to S and by m_H(S) the minimizer of H_ρ(S, S̃) with respect to S̃. We can now formulate an algorithm for the fast minimization of the functional L_μ(S, S̃) inspired by the algorithm FALM in [5] (see Algorithm 1). Note that, in the alternating descent for the simultaneous minimization of the two functions G_{μ,ρ} and H_ρ, we use an auxiliary matrix Z_k. This matrix is a linear combination of the updates for S and S̃. The work by Ma and Goldfarb shows indeed that the particular choice made here leads to an optimal rate of numerical convergence. Key formulas in the link prediction algorithm are those of the minimizers m_G(S̃) and m_H(S). It turns out that in our case, these minimizers have explicit expressions which can be derived by solving the first-order optimality condition, as Proposition 1 shows.

Algorithm 1 - Link Discovery Algorithm
Parameters: τ, ν, μ
Initialization: W_0 = Z_1 = A_T, θ_1 = 0
for k = 1, 2, ... do
    S_k ← m_G(Z_k) and S̃_k ← m_H(S_k)
    W_k ← (1/2)(S_k + S̃_k)
    θ_{k+1} ← (1/2)(1 + √(1 + 4θ_k²))
    Z_{k+1} ← W_k + (θ_k (S̃_k - W_{k-1}) - (W_k - W_{k-1})) / θ_{k+1}
end for

Proposition 1 Let S̄ = S̃ - ρ∇h(S̃), with singular value decomposition S̄ = Ū Diag(σ̄) V̄. We also consider the singular value decomposition of S, denoted by S = U Diag(μσ) V. We set the notation, for x > 0:

φ(x) = max { x - ρτ, x / (1 + ρτ/μ) }.

We then have:

m_G(S̃) = Ū Diag{φ(σ̄)} V̄,
m_H(S) = ( A_T - τ U Diag(min{σ, 1}) V + (1/ρ) S + ν F̂ Ω' ) ( (1 + 1/ρ) I_n + ν Ω Ω' )^{-1}.

The proof can be found in the Appendix.

Validity of the approximations and rates of convergence. Our strategy replaces the non-differentiable term in L by a smooth version of it. The next result offers guarantees that minimizing the surrogate function (3) provides an approximate solution of the initial problem (1). We will say that an element x_ε is an ε-optimal solution of a function Φ(x) if it is such that Φ(x_ε) ≤ inf_x Φ(x) + ε.

Proposition 2 The following statements hold true:
- We have, for any (S, S̃): L_μ(S, S̃) ≤ L(S, S̃) ≤ L_μ(S, S̃) + nμ/2.
- To find an ε-optimal solution of L(S, S̃), it suffices to find an ε/2-optimal solution of L_μ(S, S̃) with μ = ε/n.

The proof of this result can be derived straightforwardly from [5]. Moreover, following the proof of Theorem 4.3 in [5], one can show that the number of iterations needed to reach an ε-optimal solution of L_μ using Algorithm 1 is of the order O(1/√ε). In that result of [5], an optimal choice of the parameter ρ is provided as the inverse of the largest of the Lipschitz constants of the gradients of g_μ and h. With our notations, we can easily derive here:

ρ = min { μ/τ, 1/(1 + ν σ_1(Ω)) },

where σ_1(Ω) is the largest singular value of Ω.
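Putting Proposition 1 and Algorithm 1 together, here is a minimal numpy sketch of the whole procedure under the notation above. It is a reconstruction under stated assumptions, not the authors' implementation: the function names are ours, and the θ_1 = 0 initialization and the choice of ρ follow our reading of the text.

```python
import numpy as np

def grad_h(S, A_T, Omega, F_hat, nu):
    # Gradient of h(S) = 1/2 ||S - A_T||_F^2 + nu/2 ||S Omega - F_hat||_F^2.
    return S - A_T + nu * (S @ Omega - F_hat) @ Omega.T

def grad_g_mu(S, tau, mu):
    # Gradient of the smoothed nuclear norm g_mu: acts on singular values
    # as tau * min(sigma / mu, 1), which matches Proposition 1 with S = U Diag(mu sigma) V.
    U, sig, Vt = np.linalg.svd(S, full_matrices=False)
    return tau * (U * np.minimum(sig / mu, 1.0)) @ Vt

def m_G(S_tilde, A_T, Omega, F_hat, tau, nu, mu, rho):
    # Proposition 1: shrink the singular values of a gradient step on h.
    S_bar = S_tilde - rho * grad_h(S_tilde, A_T, Omega, F_hat, nu)
    U, sig, Vt = np.linalg.svd(S_bar, full_matrices=False)
    phi = np.maximum(sig - rho * tau, sig / (1.0 + rho * tau / mu))
    return (U * phi) @ Vt

def m_H(S, A_T, Omega, F_hat, tau, nu, mu, rho):
    n = S.shape[0]
    B = A_T - grad_g_mu(S, tau, mu) + S / rho + nu * F_hat @ Omega.T
    M = (1.0 + 1.0 / rho) * np.eye(n) + nu * Omega @ Omega.T  # symmetric
    return np.linalg.solve(M, B.T).T                           # B @ inv(M)

def link_discovery(A_T, Omega, F_hat, tau, nu, mu, n_iter=100):
    # rho follows the paper's stated choice, with sigma_1(Omega) the spectral norm.
    rho = min(mu / tau, 1.0 / (1.0 + nu * np.linalg.norm(Omega, 2)))
    W_prev = Z = A_T.astype(float)
    theta = 0.0
    for _ in range(n_iter):
        S = m_G(Z, A_T, Omega, F_hat, tau, nu, mu, rho)
        S_tilde = m_H(S, A_T, Omega, F_hat, tau, nu, mu, rho)
        W = 0.5 * (S + S_tilde)
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        Z = W + (theta * (S_tilde - W_prev) - (W - W_prev)) / theta_next
        theta, W_prev = theta_next, W
    return W
```

The returned matrix plays the role of S: its largest entries outside the support of A_T are the predicted new links.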
4 Learning the graph features

As discussed above, one can use various features Ω and methods to predict the n × k matrix F_{T+1} given past values of the feature map F_1, F_2, ..., F_T. We consider a particular case here to use in conjunction with the main algorithm in the previous section. In particular, we use as features Ω the first k eigenvectors of the adjacency matrix A_T. Let A_T = ΦΛΦ' be the orthonormal eigenvalue decomposition of A_T, which is symmetric. We set Ω = Φ_(:,1:k) Λ_(1:k,1:k)^{-1}, an n × k matrix. Note that A_T Ω = Φ_(:,1:k) and that Φ_(:,1:k) is the most informative n × k matrix for the reconstruction of A_T. The suggested method aims to estimate A_{T+1} Ω, which is informative for the reconstruction of A_{T+1}.

We denote by ω_j, j ∈ {1, 2, ..., k}, the n-dimensional feature vectors which are the columns of Ω. For each feature j ∈ {1, 2, ..., k}, we consider the n-dimensional time series {A_t ω_j, t = 1, ..., T}, which describes the evolution of the j-th feature over the n vertices of the graph. We now describe the procedure for learning the evolution of this j-th graph feature over time:

1. Fix an integer m < T to learn a map between m past values (A_{t-m} ω_j, ..., A_{t-1} ω_j) and the current value of the n-dimensional vector A_t ω_j.
2. Construct the training data for the learning step by using a sliding window of size m from time t = 1 to t = T; we then have T - m + 1 training data points of dimension n × m for each feature j.
3. Use ridge regression to fit the training data.
4. Estimate the j-th column of F_{T+1} as the predicted value for A_{T+1} ω_j using the regression model at the "point" (A_{T-m+1} ω_j, ..., A_T ω_j).

Collecting the results for each j ∈ {1, 2, ..., k}, we obtain the estimate F̂ for the matrix F_{T+1} used in (1). We point out that using a construction with a time shift implicitly assumes that the relation between m consecutive values (A_{t-m} ω_j, ..., A_{t-1} ω_j) and the next value A_t ω_j is stable over time (stationarity assumption). Clearly, methods other than ridge regression or other ways of creating the training data can be used, which we leave for future work.

5 Experimental Results

We tested the proposed method using both simulated and real data sets. As benchmarks we use the following methods:

1. Static matrix completion, corresponding to ν = 0 in (1).
2. The Katz algorithm [8], considered one of the best static link prediction methods.
3. The Preferential Attachment method [1], for which the score ("likelihood") of an edge {u, v} is d_u d_v, where d_u and d_v are the degrees of u and v.

5.1 Synthetic Data

We generate sequences of graphs as follows. We first generate a sequence of T matrices Q(t) of size n × r whose entries Q_{i,j}(t) increase over time as a sigmoid function:

Q_{i,j}(t) = (1/2) ( 1 + erf( (t - μ_{i,j}) / √(2 σ_{i,j}²) ) ),

where μ_{i,j} ∈ [0, T] and σ_{i,j} ∈ [0, T/3] are picked uniformly for each (i, j). These matrices provide a synthetic model for the evolution of the graph over time. We then add noise to the time dynamics as follows. For a given noise level η ∈ [0, 1], we replace each entry Q_{i,j}(t), with probability η, by any of the other values Q_{i,j}(s), for s picked uniformly from {1, 2, ..., T}. Having constructed the matrices Q(t), we then generate matrices S(t) = Q(t)Q(t)', which are of rank r. We finally generate the adjacency matrix A_t as A(t) = 1_{[ε,∞)}(S(t)) for a threshold ε. We pick ε so that the sparsity (i.e. proportion of non-zero entries) of A_T reflects the sparsity of the real data used in the next section (≈ 10⁻³).
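The generation scheme just described is easy to reproduce; below is a minimal numpy/scipy sketch of it (our illustration, not the authors' code; the quantile-based choice of the threshold ε, the function name, and the default parameter values are assumptions):

```python
import numpy as np
from scipy.special import erf

def synthetic_graph_sequence(n=1000, r=5, T=10, eta=0.25,
                             target_sparsity=1e-3, seed=0):
    # Sigmoid growth curves Q_ij(t) = 0.5 * (1 + erf((t - mu_ij) / sqrt(2 sigma_ij^2))).
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0, T, size=(n, r))
    sigma = rng.uniform(0, T / 3, size=(n, r))
    t = np.arange(1, T + 1)[:, None, None]             # shape (T, 1, 1)
    Q = 0.5 * (1 + erf((t - mu) / np.sqrt(2 * sigma ** 2)))
    # Temporal noise: with probability eta, replace Q_ij(t) by Q_ij(s), s uniform.
    swap = rng.random((T, n, r)) < eta
    s = rng.integers(0, T, size=(T, n, r))
    Q = np.where(swap, np.take_along_axis(Q, s, axis=0), Q)
    # Rank-r scores S(t) = Q(t) Q(t)', thresholded to adjacency matrices.
    S = np.einsum("tik,tjk->tij", Q, Q)
    eps = np.quantile(S[-1], 1 - target_sparsity)      # sparsity of A_T ~ 10^-3
    return (S >= eps).astype(int)                       # A(1), ..., A(T)
```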
In the experiments, we simulated graphs with n = 1000 vertices.

5.2 Real Data

Collaborative Filtering.¹ We can see the purchase histories of e-commerce websites as graph sequences where links are established between a user and a product when the user purchases that product. We use data from 10 months of music purchase history of a major e-commerce website to evaluate our method. For our test we selected a set of 10³ users and 10³ products that had the highest degrees (numbers of sales). We split the 8.5 × 10³ edges of the graph (corresponding to purchases) into two parts following their occurrence time. We used the data of the first 8 months to predict the features at the end of the 10th month, and used these features, as well as the matrix at the end of the 8th month, to discover the purchases during the 2 last months.

¹ Notice that we are looking to discover only unobserved links and not new occurrences of past links. Thus the comparison with some popular benchmarks (such as coauthorship data sets) is inappropriate.

5.3 Results

The results are shown in Figure 1 and Tables 1 and 2. The Area Under an ROC Curve (AUC) is reported. For the simulation data we report the average AUC over 10 simulation runs. From the simulation results we observe that for low-rank underlying matrices, our method outperforms the rivals. The same comparative results were observed for ranks as high as 100. Our method (as well as the static low-rank method based on the low-rank hypothesis) fails, however, when the rank of S(t) is high. Even in this case, our method still outperforms the method of static matrix completion. The results with the real data further indicate the advantage of using information about the evolution of the graph over time. Similarly to the simulation data, the proposed method outperforms the static matrix completion one.

Figure 1: AUC performance of the proposed algorithm with respect to the two parameters τ and ν on simulated data.

(r, η) \ Method | Proposed Method | Static | Pref. A. | Katz
(5, 0.000) | 0.671 ± 0.008 | 0.648 ± 0.008 | 0.627 ± 0.015 | 0.616 ± 0.015
(5, 0.250) | 0.675 ± 0.009 | 0.642 ± 0.007 | 0.602 ± 0.016 | 0.592 ± 0.016
(5, 0.750) | 0.519 ± 0.007 | 0.525 ± 0.005 | 0.497 ± 0.007 | 0.491 ± 0.007
(500, 0.000) | 0.592 ± 0.008 | 0.587 ± 0.007 | 0.671 ± 0.010 | 0.667 ± 0.009
(500, 0.250) | 0.607 ± 0.011 | 0.588 ± 0.009 | 0.649 ± 0.009 | 0.643 ± 0.009
(500, 0.750) | 0.601 ± 0.010 | 0.583 ± 0.007 | 0.645 ± 0.017 | 0.641 ± 0.017

Table 1: Simulation data. The average AUC over 10 simulation runs is reported. For each row, the pair of numbers in the first column shows the rank r and the noise level η.

6 Conclusion

The main contribution of this work is the formulation of a learning problem that can be used to predict the evolution of the edges of a graph over time. A regularization approach combining both static graph information and information about the dynamics of the evolution of the graph over time is proposed, and an optimization algorithm is developed. Despite using simple graph features, as well as a simple estimation of the evolution of the feature values over time, experiments indicate that the proposed optimization method improves performance relative to benchmarks. Testing, or learning, other graph features as well as other ways to model their dynamics over time may further improve performance and is part of future work.

Appendix - Proof of Proposition 1

We first write the optimality condition for G_{μ,ρ}(S, S̃) with respect to S:
∇_S g_μ(S) + ∇h(S̃) + (1/ρ)(S - S̃) = 0.

τ \ ν | 0 | 0.1 | 0.3 | 0.7 | 1.6
0 | 0.568 | 0.584 | 0.585 | 0.585 | 0.562
1 | 0.626 | 0.684 | 0.683 | 0.675 | 0.668
2 | 0.638 | 0.678 | 0.671 | 0.688 | 0.672
3 | 0.569 | 0.646 | 0.635 | 0.645 | 0.643
4 | 0.569 | 0.556 | 0.562 | 0.565 | 0.563

Table 2: Collaborative Filtering data; AUC for different values of τ and ν. The AUC of preferential attachment is 0.6019, and Katz reaches 0.6670.

With the notation for S̄, the previous condition can be written:

ρ ∇g_μ(S) + S - S̄ = 0.

We now use the fact that ∇g_μ(S) = τ U Diag(min{σ, 1}) V, where S/μ = U Diag(σ) V (see [5]). This observation leads to the solution where S satisfies:

U = Ū,  V = V̄,  μσ + ρτ min{σ, 1} = σ̄,

which gives the first result, since there is a unique solution due to the strict convexity of the function. Similarly, the optimality condition of H_ρ(S, S̃) with respect to S̃ is

∇g_μ(S) + ∇h(S̃) + (1/ρ)(S̃ - S) = 0.

Since the function h is differentiable as the sum of two quadratic terms, we have:

∇h(S̃) = S̃ - A_T + ν (S̃Ω - F̂) Ω',

and we can derive the optimal value for S̃:

m_H(S) = ( A_T - ∇g_μ(S) + (1/ρ) S + ν F̂ Ω' ) ( (1 + 1/ρ) I_n + ν Ω Ω' )^{-1}.

Acknowledgments

This work was partially supported by DIGITEO (BEMOL project), whom the authors greatly thank.

References

[1] A. L. Barabási, H. Jeong, Z. Néda, A. Schubert, and T. Vicsek. Evolution of the social network of scientific collaborations. Physica A: Statistical Mechanics and its Applications, 311(3-4):590-614, 2002.
[2] Emmanuel J. Candès and Terence Tao. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2008.
[3] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5), 2009.
[4] Lise Getoor and Christopher P. Diehl. Link mining: a survey. SIGKDD Explorations Newsletter, 7(2):3-12, 2005.
[5] Donald Goldfarb and Shiqian Ma. Fast alternating linearization methods for minimizing the sum of two convex functions. Technical Report, Department of IEOR, Columbia University, 2009.
[6] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In Proceedings of the 8th IEEE International Conference on Data Mining (ICDM 2008), pages 263-272, 2008.
[7] Hisashi Kashima, Tsuyoshi Kato, Yoshihiro Yamanishi, Masashi Sugiyama, and Koji Tsuda. Link propagation: A fast semi-supervised learning algorithm for link prediction. In Proceedings of the SIAM International Conference on Data Mining, SDM 2009, pages 1099-1110, 2009.
[8] Leo Katz. A new status index derived from sociometric analysis. Psychometrika, 18(1):39-43, 1953.
[9] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7):1019-1031, 2007.
[10] Mayank Lahiri and Tanya Y. Berger-Wolf. Structure prediction in temporal networks using frequent subgraphs. IEEE Symposium on Computational Intelligence and Data Mining (CIDM), 2007.
[11] Kurt Miller, Thomas Griffiths, and Michael Jordan. Nonparametric latent feature models for link prediction. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1276-1284, 2009.
[12] Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[13] Purnamrita Sarkar, Sajid Siddiqi, and Geoffrey J. Gordon. A latent space approach to dynamic embedding of co-occurrence data.
In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.
[14] Ashish Sood, Gareth M. James, and Gerard J. Tellis. Functional regression: A new model for predicting market penetration of new products. Marketing Science, 28(1):36-51, 2009.
[15] Nathan Srebro, Jason D. M. Rennie, and Tommi S. Jaakkola. Maximum-margin matrix factorization. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1329-1336. MIT Press, Cambridge, MA, 2005.
[16] Ben Taskar, Ming-Fai Wong, Pieter Abbeel, and Daphne Koller. Link prediction in relational data. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[17] Demetrios Vakratsas, Fred M. Feinberg, Frank M. Bass, and Gurumurthy Kalyanaram. The Shape of Advertising Response Functions Revisited: A Model of Dynamic Probabilistic Thresholds. Marketing Science, 23(1):109-119, 2004.
Categories and Functional Units: An Infinite Hierarchical Model for Brain Activations

Danial Lashkari, Ramesh Sridharan, Polina Golland
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology, Cambridge, MA 02139
{danial, rameshvs, polina}@csail.mit.edu

Abstract

We present a model that describes the structure in the responses of different brain areas to a set of stimuli in terms of stimulus categories (clusters of stimuli) and functional units (clusters of voxels). We assume that voxels within a unit respond similarly to all stimuli from the same category, and design a nonparametric hierarchical model to capture inter-subject variability among the units. The model explicitly encodes the relationship between brain activations and fMRI time courses. A variational inference algorithm derived based on the model learns categories, units, and a set of unit-category activation probabilities from data. When applied to data from an fMRI study of object recognition, the method finds meaningful and consistent clusterings of stimuli into categories and voxels into units.

1 Introduction

The advent of functional neuroimaging techniques, in particular fMRI, has for the first time provided non-invasive, large-scale observations of brain processes. Functional imaging techniques allow us to directly investigate the high-level functional organization of the human brain. Functional specificity is a key aspect of this organization and can be studied along two separate dimensions: 1) which sets of stimuli or cognitive tasks are treated similarly by the brain, and 2) which areas of the brain have similar functional properties. For instance, in studies of visual object recognition the first question defines object categories intrinsic to the visual system, while the second characterizes regions with distinct profiles of selectivity. To answer these questions, fMRI studies examine the responses of all relevant brain areas to as many stimuli as possible within the domain under study. Novel methods of analysis are needed to extract the patterns of functional specificity from the resulting high-dimensional data.

Clustering is a natural choice for answering the questions we pose here regarding functional specificity with respect to both stimuli and voxels. Applying clustering in the space of stimuli identifies stimuli that induce similar patterns of response and has been recently used to discover object categories from responses in the human inferior temporal cortex [1]. Applying clustering in the space of brain locations seeks voxels that show similar functional responses [2, 3, 4, 5]. We will refer to a cluster of voxels with similar responses as a functional unit. In this paper, we present a model to investigate the interactions between these two aspects of functional specificity. We make the natural assumptions that functional units are organized based on their responses to the categories of stimuli, and that the categories of stimuli can be characterized by the responses they induce in the units. Therefore, categories and units are interrelated and informative about each other. Our generative model simultaneously learns the specificity structure in the space of both stimuli and voxels. We use a block co-clustering framework to model the relationship between clusters of stimuli and brain locations [6].
In order to account for variability across subjects in a group study, we assume a hierarchical model where a group-level structure generates the clustering of voxels in different subjects (Fig. 1). A nonparametric prior enables the model to search the space of different numbers of clusters. Furthermore, we tailor the method specifically to brain imaging by including a model of fMRI signals [7].

Figure 1: Co-clustering fMRI data across subjects. The first row shows a hypothetical data set of brain activations. The second row shows the same data after co-clustering, where rows and columns are re-ordered based on the membership in categories and functional units.

Most prior work applies existing machine learning algorithms to functional neuroimaging data. In contrast, our Bayesian integration of the co-clustering model with the model of fMRI signals informs each level of the model about the uncertainties of inference in the other levels. As a result, the algorithm is better suited to handling the high levels of noise in fMRI observations. We apply our method to a group fMRI study of visual object recognition where 8 subjects are presented with 69 distinct images. The algorithm finds a clustering of the set of images into a number of categories, along with a clustering of voxels in different subjects into units. We find that the learned categories and functional units are indeed meaningful and consistent.

Related Work. Different variants of co-clustering algorithms have found applications in biological data analysis [8, 9, 10]. Our model is closely related to the probabilistic formulations of co-clustering [11, 12] and the application of Infinite Relational Models to co-clustering [13]. Prior work in the applications of advanced machine learning techniques to fMRI has mainly focused on supervised learning, which requires prior knowledge of stimulus categories [14]. Unsupervised learning methods such as Independent Component Analysis (ICA) have also been applied to fMRI data to decompose it into a set of spatial and temporal (functional) components [15, 16]. ICA assumes an additive model for the data and allows spatially overlapping components. However, neither of these assumptions is appropriate for studying functional specificity. For instance, an fMRI response that is a weighted combination of a component selective for category A and another component selective for category B may be better described by selectivity for a new category (the union of both). We also note that Formal Concept Analysis, which is closely related to the idea of block co-clustering, has been recently applied to neural data from visual studies in monkeys [17].

2 Model

Our model consists of three main components:
I. Co-clustering structure expressing the relationship between the clustering of stimuli (categories) and the clustering of brain voxels (functional units),
II. Hierarchical structure expressing the variability among functional units across subjects,
III. Signal model expressing the relationship between voxel activations and observed fMRI time courses.

The co-clustering level is the key element of the model that encodes the interactions between stimulus categories and functional units. Due to the differences in the level of noise among subjects, we do not expect to find the same set of functional units in all subjects. We employ the structure of the Hierarchical Dirichlet Processes (HDP) [18] to account for this fact.
The first two components of the model jointly explain how different brain voxels are activated by each stimulus in the experiment. The third component of the model links these binary activations to the observed fMRI time courses of voxels.

Figure 2: The graphical representation of our model, where the set of voxel response variables (a_ji, e_jih, λ_ji) and their corresponding prior parameters (μ^a_j, σ^a_j, μ^e_jh, σ^e_jh, ν_j, ξ_j) are denoted by θ_ji and Θ_j, respectively. Notation:
x_jis - activation of voxel i in subject j to stimulus s
z_ji - unit membership of voxel i in subject j
c_s - category membership of stimulus s
φ_{k,l} - activation probability of unit k to category l
π_j - unit prior weight in subject j
β - group-level unit prior weight
α, γ - unit HDP scale parameters
ψ - category prior weight
κ - category DP scale parameter
ω₁, ω₂ - prior parameters for activation probabilities
y_jit - fMRI signal of voxel i in subject j at time t
e_jih - nuisance effect h for voxel i in subject j
a_ji - amplitude of activation of voxel i in subject j
λ_ji - reciprocal of the noise variance for voxel i in subject j
μ^a_j, σ^a_j - prior parameters for response amplitudes
μ^e_jh, σ^e_jh - prior parameters for nuisance factors
ν_j, ξ_j - prior parameters for noise variance

Sec. 2.1 presents the hierarchical co-clustering part of the model, which includes both the first and the second components above. Sec. 2.2 presents the fMRI signal model that integrates the estimation of voxel activations with the rest of the model. Sec. 2.3 outlines the variational algorithm that we employ for inference. Fig. 2 shows the graphical model for the joint distribution of the variables in the model.

2.1 Nonparametric Hierarchical Co-clustering Model

Let x_jis ∈ {0, 1} be an activation variable that indicates whether stimulus s activates voxel i in subject j. The co-clustering model describes the distribution of voxel activations x_jis based on the category and the functional unit to which stimulus s and voxel i belong. We assume that all voxels within functional unit k have the same probability φ_{k,l} of being activated by a particular category l of stimuli. Let z = {z_ji}, z_ji ∈ {1, 2, ...}, be the set of unit memberships of voxels and c = {c_s}, c_s ∈ {1, 2, ...}, the set of category memberships of the stimuli. Our model of co-clustering assumes:

x_jis | z_ji, c_s, φ ~ Bernoulli(φ_{z_ji, c_s}),  i.i.d.   (1)

The set φ = {φ_{k,l}} of the probabilities of activation of functional units to different categories summarizes the structure in the responses of voxels to stimuli. We use the stick-breaking formulation of HDP [18] to construct an infinite hierarchical prior for voxel unit memberships:

z_ji | π_j ~ Mult(π_j),  i.i.d.   (2)
π_j | β ~ Dir(αβ),   (3)
β | γ ~ GEM(γ).   (4)

Here, GEM(γ) is a distribution over infinitely long vectors β = [β₁, β₂, ...]', named after Griffiths, Engen and McCloskey [19]. This distribution is defined as:

β_k = v_k ∏_{k'=1}^{k-1} (1 - v_{k'}),  v_k | γ ~ Beta(1, γ),  i.i.d.   (5)

where the components of the generated vectors β sum to one with probability 1. In subject j, voxel memberships are distributed according to the subject-specific weights π_j of the functional units. The weights π_j are in turn generated by a Dirichlet distribution centered around β with a degree of variability determined by α. Therefore, β acts as the group-level expected value of the subject-specific weights.
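For concreteness, the following numpy sketch draws from a truncated version of the prior in Eqs. (2)-(5). It is our illustration: the truncation level K, the renormalization step, and the values γ = 3 and α = 100 (borrowed from the synthetic experiment in Section 3.1) are assumptions, and the symbol names follow our reconstruction of the notation.

```python
import numpy as np

def gem_weights(gamma, K, rng):
    # Truncated stick-breaking draw beta ~ GEM(gamma), Eq. (5):
    # beta_k = v_k * prod_{k' < k} (1 - v_{k'}), with v_k ~ Beta(1, gamma).
    v = rng.beta(1.0, gamma, size=K)
    remaining_stick = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta = v * remaining_stick
    return beta / beta.sum()  # renormalize the mass lost to truncation

rng = np.random.default_rng(0)
beta = gem_weights(gamma=3.0, K=30, rng=rng)    # group-level unit weights, Eq. (4)
pi_j = rng.dirichlet(100.0 * beta)              # subject weights ~ Dir(alpha*beta), Eq. (3)
z_j = rng.choice(len(beta), size=500, p=pi_j)   # voxel unit memberships, Eq. (2)
```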
With this prior over the unit memberships of voxels z, the model in principle allows an infinite number of functional units; however, for any finite set of voxels, a finite number of units is sufficient to include all voxels. We do not impose a similar hierarchical structure on the clustering of stimuli among subjects. Conceptually, we assume that stimulus categories reflect how the human brain has evolved to organize the processing of stimuli within a system, and are therefore identical across subjects. Even if any variability exists, it will be hard to learn such a complex structure from data since we can present relatively few stimuli in each experiment. Hence, we assume an identical clustering c in the space of stimuli for all subjects, with a Dirichlet process prior:

c_s | ψ ~ Mult(ψ),  ψ | κ ~ GEM(κ),  i.i.d.   (6)

Finally, we construct the prior distribution for the unit-category activation probabilities φ:

φ_{k,l} ~ Beta(ω₁, ω₂),  i.i.d.   (7)

2.2 Model of fMRI Signals

Functional MRI yields a noisy measure of average neuronal activation in each brain voxel at different time points. The standard linear time-invariant model of fMRI signals expresses the contribution of each stimulus by the convolution of the spike train of stimulus onsets with a hemodynamic response function (HRF) [20]. The HRF peaks at about 6-9 seconds, modeling an intrinsic delay between the underlying neural activity and the measured fMRI signal. Accordingly, the measured signal y_jit in voxel i of subject j at time t is modeled as:

y_jit = Σ_s b_jis G_st + Σ_h e_jih F_ht + ε_jit,   (8)

where G_st is the model regressor for stimulus s, F_ht represents nuisance factor h, such as a baseline or a linear temporal trend, at time t, and ε_jit is Gaussian noise. We use the simplifying assumption throughout that ε_jit ~ Normal(0, λ_ji^{-1}), i.i.d.

In the absence of any priors, the response b_jis of voxel i to stimulus s can be estimated by solving the least squares regression problem. Unfortunately, fMRI signal does not have a meaningful scale and may vary greatly across trials and experiments. In order to use this data for inferences about brain function across subjects, sessions, and stimuli, we need to transform it into a standard and meaningful space. The binary activation variables x, introduced in the previous section, achieve this transformation by assuming that in response to any stimulus a voxel is either in an active or a non-active state, similar to [7]. If voxel i is activated by stimulus s, i.e., if x_jis = 1, its response takes positive value a_ji, which specifies the voxel-specific amplitude of response; otherwise, its response remains 0. We can write b_jis = a_ji x_jis and assume that a_ji represents uninteresting variability in the fMRI signal. When making inference on the binary activation variable x_jis, we consider not only the response, but also the level of noise and the responses to other stimuli. Therefore, the binary activation variables can be directly compared across different subjects, sessions, and experiments.

We assume the following priors on the voxel response variables:

e_jih ~ Normal(μ^e_jh, σ^e_jh),   (9)
a_ji ~ Normal+(μ^a_j, σ^a_j),   (10)
λ_ji ~ Gamma(ν_j, ξ_j),   (11)

where Normal+ denotes a normal distribution constrained to take only positive values.
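As an illustration of the generative signal model, the sketch below simulates one voxel's time course from Eq. (8) with b_jis = a_ji x_jis. It is ours, not the study's code: the random matrix standing in for the HRF-convolved regressors G, the two nuisance regressors, and all numeric values are assumptions.

```python
import numpy as np

def simulate_voxel(x_i, a_i, G, e_i, F, lam_i, rng):
    # One voxel's time course under Eq. (8), with b_is = a_i * x_is:
    # stimuli that activate the voxel (x_is = 1) contribute with amplitude a_i.
    b_i = a_i * x_i
    noise = rng.normal(0.0, 1.0 / np.sqrt(lam_i), size=G.shape[1])  # precision lam_i
    return b_i @ G + e_i @ F + noise

rng = np.random.default_rng(0)
S, T = 69, 120                               # stimuli, time points per run
G = rng.random((S, T))                       # stand-in for HRF-convolved regressors
F = np.vstack([np.ones(T),                   # baseline nuisance factor
               np.linspace(-1.0, 1.0, T)])   # linear-trend nuisance factor
x_i = rng.integers(0, 2, size=S)             # binary activations of this voxel
y_i = simulate_voxel(x_i, a_i=1.5, G=G,
                     e_i=np.array([10.0, 0.5]), F=F, lam_i=4.0, rng=rng)
```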
2.3 Algorithm

The size of common fMRI data sets and the space of hidden variables in our model make stochastic inference methods, such as Gibbs sampling, prohibitively slow. Currently, there is no faster split-merge-type sampling technique that can be applied to hierarchical nonparametric models [18]. We therefore choose a variational Bayesian inference scheme, which is known to yield faster algorithms. To formulate the inference for the hierarchical unit memberships, we closely follow the derivation of the Collapsed Variational HDP approximation [21]. We integrate over the subject-specific unit weights π = {π_j} and introduce a set of auxiliary variables r = {r_jk} that represent the number of tables corresponding to unit (dish) k in subject (restaurant) j, according to the Chinese restaurant franchise formulation of HDP [18]. Let h = {x, z, c, r, a, λ, e, φ, v, u} denote the set of all unobserved variables. Here, v = {v_k} and u = {u_l} are the stick-breaking fractions corresponding to the distributions β and ψ, respectively. We approximate the posterior distribution on the hidden variables given the observed data, p(h|y), by a factorizable distribution q(h). The variational method minimizes the Gibbs free energy function F[q] = E[log q(h)] - E[log p(y, h)], where E[·] indicates the expected value with respect to the distribution q. We assume a distribution q of the form:

q(h) = q(r|z) ∏_k q(v_k) ∏_l q(u_l) ∏_{k,l} q(φ_{k,l}) ∏_s q(c_s) ∏_{j,i} [ q(a_ji) q(λ_ji) q(z_ji) ∏_s q(x_jis) ∏_h q(e_jih) ].

We apply coordinate descent in the space of q(·) to minimize the free energy. Since we explicitly account for the dependency of the auxiliary variables on unit memberships in the posterior, we can derive closed-form update rules for all hidden variables. Due to space constraints in this paper, we present the update rules and their derivations in the Supplementary Material. Iterative application of the update rules leads to a local minimum of the Gibbs free energy.

Since variational solutions are known to be biased toward their initial configurations, the initialization phase becomes critical to the quality of the results. For initialization of the activation variables x_jis, we estimate b_jis in Eq. (8) using least squares regression and, for each voxel, normalize the estimates to values between 0 and 1 using the voxel-wise maximum and minimum. We use the estimates of b to also initialize φ and e. For memberships, we initialize q(z) by introducing the voxels one by one, in random order, to the collapsed Gibbs sampling scheme [18] constructed for our model, with each stimulus as a separate category and the initial x assumed known. We initialize category memberships c by clustering the voxel responses across all subjects. Finally, we set the hyperparameters of the fMRI model such that they match the corresponding statistics computed by least squares regression on the data.
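The initialization of x described above amounts to an ordinary least squares fit of Eq. (8) followed by a per-voxel min-max rescaling. Here is a minimal numpy sketch (ours; the function name and the small epsilon guarding against constant rows are added assumptions):

```python
import numpy as np

def init_activations(Y, G, F):
    # Y: voxels x time, G: stimuli x time, F: nuisance x time.
    # Least squares estimates of the responses b in Eq. (8),
    # normalized per voxel to [0, 1] with the voxel-wise min and max.
    X = np.vstack([G, F]).T                        # time x (stimuli + nuisance)
    coef, *_ = np.linalg.lstsq(X, Y.T, rcond=None)
    B = coef[: G.shape[0]].T                       # voxels x stimuli responses
    lo = B.min(axis=1, keepdims=True)
    hi = B.max(axis=1, keepdims=True)
    return (B - lo) / (hi - lo + 1e-12)            # initial values for q(x = 1)
```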
3 Results

We demonstrate the performance of the model and the inference algorithm on both synthetic and real data. As a baseline algorithm for comparison, we use the Block Average Co-clustering (BAC) algorithm [6] with the Euclidean distance. First, we show that the hierarchical structure of our algorithm enables us to retrieve the cluster memberships more accurately in synthetic group data. Then, we present the results of our method in an fMRI study of visual object recognition.

Figure 3: Comparison between our nonparametric Bayesian co-clustering algorithm (NBC) and Block Average Co-clustering (BAC) on synthetic data. Both classification accuracy (CA) and normalized mutual information (NMI) are reported, for voxels and stimuli, on Datasets 1-5.

3.1 Synthetic Data

We generate synthetic data from a stochastic process defined by our model with the set of parameters γ = 3, α = 100, κ = 1, and ω₁ = ω₂ = 1, N_j = 1000 voxels, S = 100 stimuli, and J = 4 subjects. For the model of the fMRI signals, we use parameters that are representative of our experimental setup and the corresponding hyperparameters estimated from the data. We generate 5 data sets with these parameters; they have between 5 and 7 categories and 13 to 21 units. We apply our algorithm directly to the time courses in the 5 different data sets generated using the above scheme. To apply BAC to the same data sets, we need to first turn the time courses into voxel-stimulus data. We use the least squares estimates of voxel responses (b_jis), normalized in the same way as we initialize our fMRI model. We run each algorithm 20 times with different initializations. The BAC algorithm is initialized by the result of a soft k-means clustering in the space of voxels. Our method is initialized as explained in the previous section. For BAC, we use the true number of clusters, while our algorithm is always initialized with 15 clusters.

We evaluate the results of clustering with respect to both voxels and stimuli by comparing the clustering results with the ground truth. Since there is no consensus on the best way to compare different clusterings of the same set, here we employ two different clustering distance measures. Let P(k, k') denote the fraction of data points (voxels or stimuli) assigned to cluster k in the ground truth and k' in the estimated clustering. The first measure is the so-called classification accuracy (CA), which is defined as the fraction of data points correctly assigned to the true clusters [22]. To compute this measure, we need to first match the cluster indices in our results with the true clustering. We find a one-to-one matching between the two sets of clusters by solving a bipartite graph matching problem. We define the graph such that the two sets of cluster indices represent the nodes and P(k, k') represents the weight of the edge between node k and k'. As the second measure, we use the normalized mutual information (NMI), which expresses the proportion of the entropy (information) of the ground truth clustering that is shared with the estimated clustering. We define two random variables X and Y that take values in the spaces of the true and the estimated cluster indices, respectively. Assuming a joint distribution P(X = k, Y = k') = P(k, k'), we set NMI = I(X; Y)/H(X). Both measures take values between 0 and 1, with 1 corresponding to perfect clustering.
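Both measures are easy to compute. The sketch below (our code, not the paper's) finds the one-to-one cluster matching for CA with the Hungarian algorithm and evaluates NMI = I(X; Y)/H(X) from the joint counts.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def joint_counts(true, est):
    # Counts N(k, k') of points in true cluster k and estimated cluster k'.
    true, est = np.asarray(true), np.asarray(est)
    P = np.zeros((true.max() + 1, est.max() + 1))
    for t, e in zip(true, est):
        P[t, e] += 1
    return P

def classification_accuracy(true, est):
    # CA: fraction of points in correctly matched clusters, with the
    # one-to-one matching found by maximum-weight bipartite matching.
    P = joint_counts(true, est)
    rows, cols = linear_sum_assignment(-P)  # maximize total matched mass
    return P[rows, cols].sum() / len(true)

def normalized_mutual_information(true, est):
    # NMI = I(X; Y) / H(X) from the joint distribution P(k, k').
    P = joint_counts(true, est) / len(true)
    px, py = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    mutual_info = (P[nz] * np.log(P[nz] / np.outer(px, py)[nz])).sum()
    entropy_true = -(px[px > 0] * np.log(px[px > 0])).sum()
    return mutual_info / entropy_true
```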
Fig. 3 presents the clustering quality measures for the two algorithms on the 5 generated data sets. As expected, our method performs consistently better in finding the true clustering structure on data generated by the co-clustering process. Since the two algorithms share the same block co-clustering structure, the advantage of our method lies in its model of the hierarchical structure and the fMRI signals.

3.2 Experiment

We apply our method to data from an fMRI study where 8 subjects view 69 distinct images. Each image is repeated on average about 40 times in one of the two sessions of the experiment. The data includes 42 slices of 1.65mm thickness with in-plane voxel size of 1.5mm, aligned with the temporal lobe (ventral visual pathway). As part of the standard preprocessing stream, the data was first motion-corrected separately for the two sessions [23], and then spatially smoothed with a Gaussian kernel of 3mm width. The time course data included 120 volumes per run and from 24 to 40 runs for each subject. We registered the data from the two sessions to the subject's native anatomical space [24]. We removed noisy voxels from the analysis by performing an ANOVA test and only keeping the voxels for which the stimulus regressors significantly explained the variation in the time course (threshold p = 10⁻⁴, uncorrected). This procedure selects on average about 6,000 voxels for each subject. Finally, to remove the idiosyncratic aspects of responses in different subjects, such as attention to particular stimuli, we regressed out the subject-average time course from the voxel signals after removing the baseline and linear trend.

We split the trials of each image into two groups of equal size and consider each group as an independent stimulus, forming a total of 138 stimuli. Hence, we can examine the consistency of our stimulus categorization with respect to identical trials. We use α = 100, γ = 5, κ = 0.1, and ω₁ = ω₂ = 1 for the nonparametric prior. We initialize our algorithm 20 times and choose the solution that achieves the lowest Gibbs free energy.

Fig. 4 shows the categories that the algorithm finds on the data from all 8 subjects. First, we note that stimulus pairs corresponding to the same image are generally assigned to the same category, confirming the consistency of the results across trials. Category 1 corresponds to the scene images and, interestingly, also includes all images of trees. This may suggest a high-level category structure that is not merely driven by low-level features. Such a structure is even more evident in the 4th category, where images of a tiger that has a large face join human faces. Some other animals are clustered together with human bodies in categories 2 and 9. Shoes and cars, which have similar shapes, are clustered together in category 3, while tools are mainly found in category 6.

The interaction between the learned categories and the functional units is summarized in the posterior unit-category activation probabilities E[φ_{k,l}] (Fig. 4, right). The algorithm finds 18 units across all subjects. The largest unit does not show preference for any of the categories. Functional unit 2 is the most selective one and shows high activation for category 4 (faces). This finding agrees with previous studies that have discovered face-selective areas in the brain [25]. Other units show selectivity for different combinations of categories. For instance, unit 6 prefers categories that mostly include body parts and animals, unit 8 prefers category 1 (scenes and trees), while the selectivity of unit 5 seems to be correlated with the pixel-size of the image.

Our method further learns sets of variables {q(z_ji = k)}_{i=1}^{N_j} that represent the probabilities that different voxels in subject j belong to functional unit k.
Although the algorithm does not use any information about the spatial location of voxels, we can visualize the posterior membership probabilities in each subject as a spatial map. To see whether there is any degree of spatial consistency in the locations of the learned units across subjects, we align the brains of all subjects with the Montreal Neurological Institute coordinate space using affine registration [26].

Figure 4: Categories (left) and activation probabilities of functional units (right) estimated by the algorithm from all 8 subjects in the study. [The right panels show, for each of units 1-18, a bar plot of activation probability over categories 1-11.]

Figure 5: (Left) Spatial maps of functional unit overlap across subjects in the normalized space, shown for units 2, 5, and 6. For each voxel, we show the fraction of subjects in the group for which the voxel was assigned to the corresponding functional unit. We see that functional units with similar profiles between the two datasets show similar spatial extent as well. (Right) Comparison between the clustering robustness in the results of our algorithm (NBC) and the best results of Block Average Co-clustering (BAC) on the real data, in terms of CA and NMI for voxels and stimuli in the two subject groups.

Fig. 5 (left) shows the average maps across subjects for units 2, 5, and 6 in the normalized space. Despite the relative sparsity of the maps, they have significant overlap across subjects. As with many other real world applications of clustering, the validation of results is challenging in the absence of ground truth. In order to assess the reliability of the results, we examine their consistency across subjects. We split the 8 subjects into two groups of 4 and perform the analysis on the two group data sets separately. Fig. 6 (left) shows the categories found for one of the two groups (group 1), which show good agreement with the categories found in the data from all subjects (categories are indexed based on the result of graph matching). As a way to quantify the stability of clustering across subjects, we compute the measures CA and NMI for the results in the two groups relative to the results in the 8 subjects. We also apply the BAC algorithm to response values estimated via least squares regression in all 8 subjects and the two groups. Since the number of units and categories is not known a priori, we perform the BAC algorithm for all pairs of (l, k) such that 5 ≤ l ≤ 15 and k ∈ {10, 12, 14, 16, 18, 20}. Fig. 5 (right) compares the clustering measures for our method with those found by the best BAC results in terms of average CA and NMI measures (achieved with (l, k) = (6, 14) for CA, and (l, k) = (14, 14) for NMI).

Figure 6: Categories found by our algorithm in group 1 (left) and by BAC in all subjects for (l, k) = (14, 14) (right).

Fig. 6 (right) shows the categories for (l, k) = (14, 14), which appear to lack some of the structures found in our results.
We also obtain better measures of stability compared to the best BAC results for clustering stimuli, while the measures are similar for clustering voxels. We note that, in contrast to the results of BAC, our first unit is always considerably larger than all the others, including about 70% of voxels. This seems neuroscientifically plausible, since we expect large areas of the visual cortex to be involved in processing low-level features and therefore incapable of distinguishing different objects.

4 Conclusion

This paper proposes a model for learning large-scale functional structures in the brain responses of a group of subjects. We assume that the structure can be summarized in terms of functional units with similar responses to categories of stimuli. We derive a variational Bayesian inference scheme for our hierarchical nonparametric Bayesian model and apply it to both synthetic and real data. In an fMRI study of visual object recognition, our method finds meaningful structures in both object categories and functional units. This work is a step toward devising models for functional brain imaging data that explicitly encode our hypotheses about the structure in the brain functional organization. The assumption that functional units, categories, and their interactions are sufficient to describe the structure, although proved successful here, may be too restrictive in general. A more detailed characterization may be achieved through a feature-based representation where a stimulus can simultaneously be part of several categories (features). Likewise, a more careful treatment of the structure in the organization of brain areas may require incorporating spatial information. In this paper, we show that we can turn such basic insights into principled models that allow us to investigate the structures of interest in a data-driven fashion. By incorporating the properties of brain imaging signals into the model, we better utilize the data for making relevant inferences across subjects.

Acknowledgments

We thank Ed Vul, Po-Jang Hsieh, and Nancy Kanwisher for the insight they have offered us throughout our collaboration, and also for providing the fMRI data. This research was supported in part by the NSF grants IIS/CRCNS 0904625, CAREER 0642971, the MIT McGovern Institute Neurotechnology Program grant, and NIH grants NIBIB NAMIC U54-EB005149 and NCRR NAC P41-RR13218.

References

[1] N. Kriegeskorte, M. Mur, D.A. Ruff, R. Kiani, J. Bodurka, H. Esteky, K. Tanaka, and P.A. Bandettini. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6):1126-1141, 2008.
[2] B. Thirion and O. Faugeras. Feature characterization in fMRI data: the Information Bottleneck approach. MedIA, 8(4):403-419, 2004.
[3] D. Lashkari and P. Golland. Exploratory fMRI analysis without spatial normalization. In IPMI, 2009.
[4] D. Lashkari, E. Vul, N. Kanwisher, and P. Golland. Discovering structure in the space of fMRI selectivity profiles. NeuroImage, 50(3):1085-1098, 2010.
[5] D. Lashkari, R. Sridharan, E. Vul, P.J. Hsieh, N. Kanwisher, and P. Golland. Nonparametric hierarchical Bayesian model for functional brain parcellation. In MMBIA, 2010.
[6] A. Banerjee, I. Dhillon, J. Ghosh, S. Merugu, and D.S. Modha. A generalized maximum entropy approach to Bregman co-clustering and matrix approximation. JMLR, 8:1919-1986, 2007.
[7] S. Makni, P. Ciuciu, J. Idier, and J.-B. Poline. Joint detection-estimation of brain activity in functional MRI: a multichannel deconvolution solution.
TSP, 53(9):3488-3502, 2005.
[8] Y. Cheng and G.M. Church. Biclustering of expression data. In ISMB, 2000.
[9] S.C. Madeira and A.L. Oliveira. Biclustering algorithms for biological data analysis: a survey. TCBB, 1(1):24-45, 2004.
[10] Y. Kluger, R. Basri, J.T. Chang, and M. Gerstein. Spectral biclustering of microarray data: coclustering genes and conditions. Genome Research, 13(4):703-716, 2003.
[11] B. Long, Z.M. Zhang, and P.S. Yu. A probabilistic framework for relational clustering. In ACM SIGKDD, 2007.
[12] D. Lashkari and P. Golland. Coclustering with generative models. CSAIL Technical Report, 2009.
[13] C. Kemp, J.B. Tenenbaum, T.L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[14] K.A. Norman, S.M. Polyn, G.J. Detre, and J.V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424-430, 2006.
[15] C.F. Beckmann and S.M. Smith. Probabilistic independent component analysis for functional magnetic resonance imaging. TMI, 23(2):137-152, 2004.
[16] M.J. McKeown, S. Makeig, G.G. Brown, T.P. Jung, S.S. Kindermann, A.J. Bell, and T.J. Sejnowski. Analysis of fMRI data by blind separation into independent spatial components. Hum Brain Mapp, 6(3):160-188, 1998.
[17] D. Endres and P. Földiák. Interpreting the neural code with Formal Concept Analysis. In NIPS, 2009.
[18] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. JASA, 101(476):1566-1581, 2006.
[19] J. Pitman. Poisson-Dirichlet and GEM invariant distributions for split-and-merge transformations of an interval partition. Combinatorics, Prob, Comput, 11(5):501-514, 2002.
[20] K.J. Friston, A.P. Holmes, K.J. Worsley, J.P. Poline, C.D. Frith, R.S.J. Frackowiak, et al. Statistical parametric maps in functional imaging: a general linear approach. Hum Brain Mapp, 2(4):189-210, 1994.
[21] Y.W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. In NIPS, 2008.
[22] M. Meilă and D. Heckerman. An experimental comparison of model-based clustering methods. Machine Learning, 42(1):9-29, 2001.
[23] R.W. Cox and A. Jesmanowicz. Real-time 3D image registration for functional MRI. Magn Reson Med, 42(6):1014-1018, 1999.
[24] D.N. Greve and B. Fischl. Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48(1):63-72, 2009.
[25] N. Kanwisher and G. Yovel. The fusiform face area: a cortical region specialized for the perception of faces. R Soc Lond Phil Trans, Series B, 361(1476):2109-2128, 2006.
[26] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain. Thieme, New York, 1988.
3,216
3,913
A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction Raquel Urtasun TTI Chicago [email protected] Tamir Hazan TTI Chicago [email protected] Abstract In this paper we propose an approximated structured prediction framework for large scale graphical models and derive message-passing algorithms for learning their parameters efficiently. We first relate CRFs and structured SVMs and show that in CRFs a variant of the log-partition function, known as the soft-max, smoothly approximates the hinge loss function of structured SVMs. We then propose an intuitive approximation for the structured prediction problem, using duality, based on a local entropy approximation and derive an efficient messagepassing algorithm that is guaranteed to converge. Unlike existing approaches, this allows us to learn efficiently graphical models with cycles and very large number of parameters. 1 Introduction Unlike standard supervised learning problems which involve simple scalar outputs, structured prediction deals with structured outputs such as sequences, grids, or more general graphs. Ideally, one would want to make joint predictions on the structured labels instead of simply predicting each element independently, as this additionally accounts for the statistical correlations between label elements, as well as between training examples and their labels. These properties make structured prediction appealing for a wide range of applications such as image segmentation, image denoising, sequence labeling and natural language parsing. Several structured prediction models have been recently proposed, including log-likelihood models such as conditional random fields (CRFs, [10]), and structured support vector machines (structured SVMs) such as maximum-margin Markov networks (M3Ns [21]). For CRFs, learning is done by minimizing a convex function composed of a negative log-likelihood loss and a regularization term. Learning structured SVMs is done by minimizing the convex regularized structured hinge loss. Despite the convexity of the objective functions, finding the optimal parameters of these models can be computationally expensive since it involves exponentially many labels. When the label structure corresponds to a tree, learning can be done efficiently by using belief propagation as a subroutine; The sum-product algorithm is typically used in CRFs and the max-product algorithm in structured SVMs. In general, when the label structure corresponds to a general graph, one cannot compute the objective nor the gradient exactly, except for some special cases in structured SVMs, such as matching and sub-modular functions [22]. Therefore, one usually resorts to approximate inference algorithms, cf. [2] for structured SVMs and [20, 12] for CRFs. However, the approximate inference algorithms are computationally too expensive to be used as a subroutine of the learning algorithm, therefore they cannot be applied efficiently for large scale structured prediction problems. Also, it is not clear how to define a stopping criteria for these approaches as the objective does not monotonically decrease since the objective and the gradient are both approximated. This might result in poor approximations. In this paper we propose an approximated structured prediction framework for large scale graphical models and derive message-passing algorithms for learning their parameters efficiently. 
We relate CRFs and structured SVMs, and show that in CRFs a variant of the log-partition function, known as the soft-max, smoothly approximates the hinge loss function of structured SVMs. We then propose an intuitive approximation for the structured prediction problem, using duality, based on a local entropy approximation and derive an efficient message-passing algorithm that is guaranteed to converge. Unlike existing approaches, this allows us to learn efficiently graphical models with cycles and very large number of parameters. We demonstrate the effectiveness of our approach in an image denoising task. This task was previously solved by sharing parameters across cliques. In contrast, our algorithm is able to efficiently learn large number of parameters resulting in orders of magnitude better prediction. In the remainder of the paper, we first relate CRFs and structured SVMs in Section 3, show our approximate prediction framework in Section 4, derive a message-passing algorithm to solve the approximated problem efficiently in Section 5, and show our experimental evaluation.

2 Regularized Structured Loss Minimization

Consider a supervised learning setting with objects x ∈ X and labels y ∈ Y. In structured prediction the labels may be sequences, trees, grids, or other high-dimensional objects with internal structure. Consider a function Φ : X × Y → R^d that maps (x, y) pairs to feature vectors. Our goal is to construct a linear prediction rule

\[ y_\theta(x) = \operatorname*{argmax}_{\hat y \in \mathcal{Y}} \; \theta^\top \Phi(x, \hat y) \]

with parameters θ ∈ R^d, such that y_θ(x) is a good approximation to the true label of x. Intuitively, one would like to minimize the loss ℓ(y, ŷ) incurred by using θ to predict the label of x, given that the true label is y. However, since the prediction is norm-insensitive this method can lead to overfitting. Therefore the parameters θ are typically learned by minimizing the norm-dependent loss

\[ \sum_{(x,y)\in S} \bar\ell(\theta, x, y) + \frac{C}{p}\,\|\theta\|_p^p \tag{1} \]

defined over a training set S. The function ℓ̄ is a surrogate loss of the true loss ℓ(y, ŷ). In this paper we focus on structured SVMs and CRFs, which are the most common structured prediction models. The first definition of structured SVMs used the structured hinge loss [21]

\[ \bar\ell_{\mathrm{hinge}}(\theta, x, y) = \max_{\hat y\in\mathcal{Y}} \big\{ \ell(y,\hat y) + \theta^\top\Phi(x,\hat y) - \theta^\top\Phi(x,y) \big\} \]

The structured hinge loss upper bounds the true loss function, and corresponds to a maximum-margin approach that explicitly penalizes training examples (x, y) for which θ^⊤Φ(x, y) < ℓ(y, y_θ(x)) + θ^⊤Φ(x, y_θ(x)). The second loss function that we consider is based on log-linear models, and is commonly used in CRFs [10]. Let the conditional distribution be

\[ p(\hat y \,|\, x, y; \theta) = \frac{1}{Z(x,y)} \exp\big( \ell(y,\hat y) + \theta^\top\Phi(x,\hat y) \big), \qquad Z(x,y) = \sum_{\hat y\in\mathcal{Y}} \exp\big( \ell(y,\hat y) + \theta^\top\Phi(x,\hat y) \big) \]

where ℓ(y, ŷ) is a prior distribution and Z(x, y) the partition function. The surrogate loss function is then the negative log-likelihood under the parameters θ,

\[ \bar\ell_{\log}(\theta, x, y) = \ln \frac{1}{p(y \,|\, x, y; \theta)}. \]

In structured SVMs and CRFs, a convex loss function and a convex regularization are minimized.

3 One-parameter extension of CRFs and Structured SVMs

In CRFs one aims to minimize the regularized negative log-likelihood of the conditional distribution p(ŷ | x, y; θ), which decomposes into the log-partition and the linear term θ^⊤Φ(x, y). Hence the problem of minimizing the regularized loss in (1) with the loss function ℓ̄_log can be written as

\[ \text{(CRF)}\qquad \min_{\theta}\; \left\{ \sum_{(x,y)\in S} \ln Z(x, y) - d^\top\theta + \frac{C}{p}\|\theta\|_p^p \right\} \]

where (x, y) ∈ S ranges over training pairs and d = Σ_{(x,y)∈S} Φ(x, y) is the vector of empirical means.
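For intuition, the two surrogate losses can be evaluated by brute-force enumeration when Y is tiny. This is an illustrative sketch, with names of our choosing, and it assumes ℓ(y, y) = 0 is encoded in the loss vector.

import numpy as np
from scipy.special import logsumexp

def surrogate_losses(theta, phi, loss, y_idx):
    """Structured hinge and CRF negative log-likelihood for one training
    pair, by explicit enumeration of the label set (feasible only for tiny Y).

    theta: (d,) parameter vector
    phi:   (|Y|, d) rows are Phi(x, yhat) for every candidate label
    loss:  (|Y|,) task losses l(y, yhat), with loss[y_idx] == 0
    y_idx: index of the true label y in the enumeration of Y
    """
    scores = loss + phi @ theta               # l(y, yhat) + theta^T Phi(x, yhat)
    hinge = scores.max() - scores[y_idx]      # max-margin surrogate (l(y,y) = 0)
    nll = logsumexp(scores) - scores[y_idx]   # ln Z(x,y) - theta^T Phi(x,y)
    return hinge, nll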
Structured SVMs aim at minimizing the regularized hinge loss ℓ̄_hinge(θ, x, y), which measures the loss of the label y_θ(x) that most violates the training pair (x, y) ∈ S by more than ℓ(y, y_θ(x)). Since y_θ(x) is independent of the training label y, the structured SVM program takes the form:

\[ \text{(structured SVM)}\qquad \min_{\theta}\; \left\{ \sum_{(x,y)\in S} \max_{\hat y\in\mathcal{Y}} \big\{ \ell(y,\hat y) + \theta^\top\Phi(x,\hat y) \big\} - d^\top\theta + \frac{C}{p}\|\theta\|_p^p \right\} \]

where (x, y) ∈ S ranges over the training pairs, and d is the vector of empirical means. In the following we deal with both structured prediction tasks (i.e., structured SVMs and CRFs) as two instances of the same framework, by extending the partition function to norms, namely

\[ Z_\epsilon(x,y) = \Big\| \exp\big( \ell(y,\hat y) + \theta^\top\Phi(x,\hat y) \big) \Big\|_{1/\epsilon} \]

where the norm is computed for the vector ranging over ŷ ∈ Y. Using the norm formulation we move from the partition function, for ε = 1, to the maximum over the exponential function for ε = 0. Equivalently, we relate the log-partition and the max-function by the soft-max function

\[ \ln Z_\epsilon(x,y) = \epsilon \ln \sum_{\hat y\in\mathcal{Y}} \exp\!\left( \frac{\ell(y,\hat y) + \theta^\top\Phi(x,\hat y)}{\epsilon} \right) \tag{2} \]

For ε = 1 the soft-max function reduces to the log-partition function, and for ε = 0 it reduces to the max-function. Moreover, when ε → 0 the soft-max function is a smooth approximation of the max-function, in the same way the ℓ_{1/ε}-norm is a smooth approximation of the ℓ_∞-norm. This smooth approximation of the max-function is used in different areas of research [8]. We thus define the structured prediction problem as

\[ \text{(structured prediction)}\qquad \min_{\theta}\; \left\{ \sum_{(x,y)\in S} \ln Z_\epsilon(x, y) - d^\top\theta + \frac{C}{p}\|\theta\|_p^p \right\} \tag{3} \]

which is a one-parameter extension of CRFs and structured SVMs, i.e., ε = 1 and ε = 0 respectively. Similarly to CRFs and structured SVMs [11, 16], one can use gradient methods to optimize structured prediction. The gradient with respect to θ_r takes the form

\[ \sum_{(x,y)\in S} \sum_{\hat y} p_\epsilon(\hat y \,|\, x, y; \theta)\, \Phi_r(x, \hat y) - d_r + C\,|\theta_r|^{p-1}\operatorname{sign}(\theta_r) \tag{4} \]

where

\[ p_\epsilon(\hat y \,|\, x, y; \theta) = \frac{1}{Z_\epsilon(x,y)^{1/\epsilon}} \exp\!\left( \frac{\ell(y,\hat y) + \theta^\top\Phi(x,\hat y)}{\epsilon} \right) \tag{5} \]

is a probability distribution over the possible labels ŷ ∈ Y. When ε → 0 this probability distribution gets concentrated around its maximal values, since all its elements are raised to the power of a very large number (i.e., 1/ε). Therefore for ε = 0 we get a structured SVM subgradient. In many real-life applications the labels y ∈ Y are n-tuples, y = (y_1, ..., y_n), hence there are exponentially many labels in Y. The feature maps usually describe relations between subsets of label elements ŷ_α ⊂ {ŷ_1, ..., ŷ_n}, and local interactions on single label elements ŷ_v, namely

\[ \Phi_r(x, \hat y_1, \ldots, \hat y_n) = \sum_{v\in V_{r,x}} \phi_{r,v}(x, \hat y_v) + \sum_{\alpha\in E_{r,x}} \phi_{r,\alpha}(x, \hat y_\alpha) \tag{6} \]

Each feature Φ_r(x, ŷ) can be described by its factor graph G_{r,x}, a bipartite graph with one set of nodes corresponding to V_{r,x} and the other set corresponding to E_{r,x}. An edge connects a single label node v ∈ V_{r,x} with a subset of label nodes α ∈ E_{r,x} if and only if ŷ_v ∈ ŷ_α. In the following we consider the factor graph G = ∪_r G_r, which is the union of all factor graphs. We denote by N(v) and N(α) the sets of neighbors of v and α, respectively, in the factor graph G. For clarity in the presentation we consider the fully factorized loss ℓ(y, ŷ) = Σ_{v=1}^n ℓ_v(y_v, ŷ_v), although our derivation naturally extends to any graphical model representing the interactions ℓ(y, ŷ). To compute the soft-max and the marginal probabilities, p_ε(ŷ_v | x, y; θ) and p_ε(ŷ_α | x, y; θ), exponentially many labels have to be considered.
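A small sketch of the soft-max of (2) and the distribution of (5), showing the interpolation between log-partition (ε = 1) and max (ε → 0). Again a toy enumeration, with our own function names.

import numpy as np
from scipy.special import logsumexp

def softmax_value(scores, eps):
    """ln Z_eps = eps * ln sum_y exp(scores/eps); eps = 1 gives the
    log-partition value, and eps -> 0 approaches max(scores)."""
    if eps == 0.0:
        return scores.max()
    return eps * logsumexp(scores / eps)

def p_eps(scores, eps):
    """Temperature-eps distribution of eq. (5); concentrates on the
    maximizers as eps -> 0."""
    s = scores / eps
    return np.exp(s - logsumexp(s))

scores = np.array([1.0, 0.5, 0.9])
for eps in (1.0, 0.1, 0.01):
    print(eps, softmax_value(scores, eps), p_eps(scores, eps).round(3))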
This is in general computationally prohibitive, and thus one has to rely on inference and message-passing algorithms. When the factor graph has no cycles inference can be efficiently computed using belief propagation, but in the presence of cycles inference can only be approximated [25, 26, 7, 5, 13]. There are two main problems when dealing with graphs with cycles and approximate inference: efficiency and accuracy. For graphs with cycles there are no guarantees on the number of steps the message-passing algorithm requires until convergence, therefore it is computationally costly to run it as a subroutine. Moreover, as these message-passing algorithms have no guarantees on the quality of their solution, the gradient and the objective function can only be approximated, and one cannot know if the update rule decreased or increased the structured prediction objective. In contrast, in this work we propose to approximate the structured prediction problem and to efficiently solve the approximated problem exactly using message-passing. Intuitively, we suggest a principled way to run the approximate inference updates for a few steps, while re-using the messages of previous steps to extract intermediate beliefs. These beliefs are used to update θ_r, although the intermediate beliefs may not agree on their marginal probabilities. This allows us to efficiently learn graphical models with large number of parameters.

4 Approximate Structured Prediction

The structured prediction objective in (3) and its gradients defined in (4) cannot be computed efficiently for general graphs, since both involve computing the soft-max function, ln Z_ε(x, y), and the marginal probabilities, p_ε(ŷ_v | x, y; θ) and p_ε(ŷ_α | x, y; θ), which take into account exponentially many elements ŷ ∈ Y. In the following we suggest an intuitive approximation for structured prediction, based on its dual formulation. Since the dual of the soft-max is the entropy barrier, it follows that the dual program for structured prediction is governed by the entropy function of the probabilities p_{x,y}(ŷ). The following duality formulation is known for CRFs when ε = 1 with ℓ₂² regularization, and for structured SVM when ε = 0 with ℓ₂² regularization [11, 21, 1]. Here we derive the dual program for every ε and every ℓ_p^p regularization using conjugate duality:

Claim 1 The dual program of the structured prediction program in (3) takes the form

\[ \max_{p_{x,y}\in\Delta_{\mathcal{Y}}}\;\; \sum_{(x,y)\in S} \Big( \epsilon H(p_{x,y}) + \sum_{\hat y} p_{x,y}(\hat y)\,\ell(y,\hat y) \Big) \;-\; \frac{C^{1-q}}{q}\, \Big\| \sum_{(x,y)\in S}\sum_{\hat y\in\mathcal{Y}} p_{x,y}(\hat y)\,\Phi(x,\hat y) - d \Big\|_q^q \]

where Δ_Y is the probability simplex over Y, H(p_{x,y}) = −Σ_ŷ p_{x,y}(ŷ) ln p_{x,y}(ŷ) is the entropy, and q is the exponent conjugate to p (1/p + 1/q = 1). Proof: In [6]

When ε = 1 the CRF dual program reduces to the well-known duality relation between the log-likelihood and the entropy. When ε = 0 we obtain the dual formulation of structured SVM, which emphasizes the duality relation between the max-function and the probability simplex. In general, Claim 1 describes the relation between the soft-max function and the entropy barrier over the probability simplex. The dual program in Claim 1 considers the probabilities p_{x,y}(ŷ) over exponentially many labels ŷ ∈ Y, as well as their entropies H(p_{x,y}). However, when we take into account the graphical model imposed by the features, G_{r,x}, we observe that the linear terms in the dual formulation consider the marginal probabilities p_{x,y}(ŷ_v) and p_{x,y}(ŷ_α). We thus propose to replace the marginal probabilities with their corresponding beliefs, and to replace the entropy term by the local entropies ε Σ_α c_α H(b_{x,y,α}) + ε Σ_v c_v H(b_{x,y,v}) over the beliefs. Whenever ε, c_v, c_α ≥ 0, the approximated dual is concave and it corresponds to a convex dual program. By deriving its dual we obtain our approximated structured prediction, for which we construct an efficient algorithm in Section 5.
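The conjugate duality behind Claim 1 can be checked numerically in the simplest single-example, unregularized case, where the dual maximizer over the simplex is the Gibbs distribution. A minimal sketch:

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
scores, eps = rng.normal(size=6), 0.7

# Soft-max (primal side)
lhs = eps * logsumexp(scores / eps)

# Entropy-barrier dual: max over the simplex of <p, s> + eps*H(p),
# attained in closed form by the Gibbs distribution p* ~ exp(s/eps)
p = np.exp(scores / eps - logsumexp(scores / eps))
rhs = p @ scores + eps * (-(p * np.log(p)).sum())

assert np.isclose(lhs, rhs)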
We thus propose to replace the marginal probabilities with their P Pcorresponding beliefs, and to replace the entropy term by the local entropies ? c? H(bx,y,? ) + v cv H(bx,y,v ) over the beliefs. Whenever , cv , c? ? 0, the approximated dual is concave and it corresponds to a convex dual program. By deriving its dual we obtain our approximated structured prediction, for which we construct an efficient algorithm in Section 5. 4 LBP-SGD LBP-SMD LBP-BFGS MF-SGD MF-SMD MF-BFGS Ours I1 2.7344 2.7344 2.7417 3.0469 2.9688 3.0005 0.0488 Gaussian noise I2 I3 2.4707 3.2275 2.4731 3.2324 2.4194 3.1299 3.0762 4,1382 3.0640 3.8721 2.7783 3.6157 0.0073 0.1294 I4 2.3193 2.3145 2.4023 2.9053 14.4360 2.4780 0.1318 I1 5.2905 5.2954 5.2148 10.0488 ? 5.2661 0.0537 Bimodal noise I2 I3 4.4751 6.8164 4.4678 6.7578 4.3994 6.0278 41.0718 29.6338 ? ? 4.6167 6.4624 0.0244 0.1221 I4 7.2510 7.2583 6.6211 53.6035 ? 7.2510 0.9277 Figure 1: Gaussian and bimodal noise: Comparison of our approach to loopy belief propagation and mean field approximations when optimizing using BFGS, SGD and SMD. Note that our approach significantly outperforms all the baselines. MF-SMD did not work for Bimodal noise. Theorem 1 The approximation of the structured prediction program in (3) takes the form ! P P X X `v (yv , y?v ) + r:v?Vr,x ?r ?r,v (x, y?v ) ? ??N (v) ?x,y,v?? (? yv ) exp min cv ln ?x,y,v?? ,? cv y?v (x,y)?S,v ! P P X X ?? ) + v?N (?) ?x,y,v?? (? yv ) C r:??Er ?r ?r,? (x, y p exp ? d> ? ? k?kp + c? ln c? p (x,y)?S,? y?? Proof: In [6] 5 Message-Passing Algorithm for Approximated Structured Prediction In the following we describe a block coordinate descent algorithm for the approximated structured prediction program of Theorem 1. Coordinate descent methods are appealing as they optimize a small number of variables while holding the rest fixed, therefore they are efficient and can be easily parallelized. Since the primal program is lower bounded by the dual program, the primal objective function is guaranteed to converge. We begin by describing how to find the optimal set of variables related to a node v in the graphical model, namely ?x,y,v?? (? yv ) for every ? ? N (v), every y?v and every (x, y) ? S. Lemma 1 Given a vertex v in the graphical model, the optimal ?x,y,v?? (? yv ) for every ? ? N (v), y?v ? Yv , (x, y) ? S in the approximated program of Theorem 1 satisfies ? !? P P X ?? ) + u?N (?)\v ?x,y,u?? (? yu ) r:??Er,x ?r ?r,? (x, y ? ?x,y,??v (? yv ) = c? ln ? exp c? y?? \? yv ? ? X X c? ? ?x,y,v?? (? yv ) = `v (yv , y?v ) + ?r ?r,v (x, y?v ) + ?x,y,??v (? yv )? ? ?x,y,??v (? yv ) + cx,y,v?? c?v r:v?Vr,x 1 ??N (v) P for every constant cx,y,v?? , where c?v = cv + ??N (v) c? . In particular, if either  and/or c? are zero then ?x,y,??v corresponds to the `? norm and can be computed by the max-function. Moreover, if either  and/or c? are zero in the objective, then the optimal ?x,y,v?? can be computed for any arbitrary c? > 0, and similarly for cv > 0. Proof: In [6] It is computationally appealing to find the optimal ?x,y,v?? (? yv ). When the optimal value cannot be found, one usually takes a step in the direction of the negative gradient and the objective function needs to be computed to ensure that the chosen step size reduces the objective. Obviously, computing the objective function at every iteration significantly slows the algorithm. When the optimal ?x,y,v?? (? yv ) can be found, the block coordinate descent algorithm can be executed efficiently in distributed manner, since every ?x,y,v?? (? 
yv ) can be computed independently. The only interactions occur when computing the normalization step cx,y,v?? . This allows for easy computation in GPUs. We now turn to describe how to change ? in order to improve the approximated structured prediction. Since we cannot find the optimal ?r while holding the rest fixed, we perform a step in the direction 1 For numerical stability in our algorithm we set cx,y,v?? such that 5 P y ?v ?x,y,v?? (? yv ) = 0 of the negative gradient, when , c? , ci are positive, or in the direction of the subgradient otherwise. We choose the step size ? to guarantee a descent on the objective. Lemma 2 The gradient of the approximated structured prediction program in Theorem 1 with respect to ?r equals to X X bx,y,v (? yv )?r,v (x, y?v ) + bx,y,? (? y? )?r,? (x, y?? ) ? dr + C ? |?r |p?1 ? sign(?r ), (x,y)?S,v?Vr,x ,? yv (x,y)?S,??Er,x ,? y? where bx,y,v (? yv ) ? exp `v (yv , y?v ) + P bx,y,? (? y? ) ? exp r:??Er,x P r:v?Vr,x ?r ?r,v (x, y?v ) ? ?r ?r,? (x, y?? ) + cv P v?N (?) P ??N (v) ?x,y,v?? (? yv ) ?x,y,v?? (? yv ) ! ! c? However, if either  and/or c? equal zero, then the beliefs bx,y,? (? y? ) can be taken from the set of probabilityndistributions over support of the max-beliefs, namely y?? ) > 0 only if o bx,y,? (? P P ?? ) + v?N (?) ?x,y,v?? (? y??? ? argmaxy?? y? ) . Similarly for bx,y,v (? yv? ) r:??Er,x ?r ?r,? (x, y whenever  and/or cv equal zero. Proof: In [6] Lemmas 1 and 2 describe the coordinate descent algorithm for the approximated structured prediction in Theorem 1. We refer the reader to [6] for a summary of our algorithm. The coordinate descent algorithm is guaranteed to converge, as it monotonically decreases the approximated structured prediction objective in Theorem 1, which is lower bounded by its dual program. However, convergence to the global minimum cannot be guaranteed in all cases. In particular, for  = 0 the coordinate descent on the approximated structured SVMs is not guaranteed to converge to its global minimum, unless one uses subgradient methods which are not monotonically decreasing. Moreover, even when we are guaranteed to converge to the global minimum, i.e., , c? , cv > 0, the sequence of variables ?x,y,v?? (? yv ) generated by the algorithm is not guaranteed to converge to an optimal solution, nor to be bounded. As a trivial example, adding an arbitrary constant to the variables, ?x,y,v?? (? yv ) + c, does not change the objective value, hence the algorithm can generate non-decreasing unbounded sequences. However, the beliefs generated by the algorithm are bounded and guaranteed to converge to the solution of the dual approximated structured prediction problem. Claim 2 The block coordinate descent algorithm in lemmas 1 and 2 monotonically reduces the approximated structured prediction objective in Theorem 1, therefore the value of its objective is guaranteed to converge. Moreover, if , c? , cv > 0, the objective is guaranteed to converge to the global minimum, and its sequence of beliefs are guaranteed to converge to the unique solution of the approximated structured prediction dual. Proof: In [6] The convergence result has a practical implication, describing the ways we can estimate the convergence of the algorithm, either by the primal objective, the dual objective or the beliefs. The approximated structured prediction can also be used for non-concave entropy approximations, such as the Bethe entropy, where c? > 0 and cv < 0. 
In this case the algorithm is well defined, and its stationary points correspond to the stationary points of the approximated structured prediction and its dual. Intuitively, this statement holds since the coordinate descent algorithm iterates over points ?x,y,v?? (? yv ), ?r with vanishing gradients. Equivalently the algorithm iterates over saddle points ?x,y,v?? (? yv ), bx,y,v (? yv ), bx,y,? (? y? ) and (?r , zr ) of the Lagrangian defined in Theorem 1. Whenever the dual program is concave these saddle points are optimal points of the convex primal, but for non-concave dual the algorithm iterates over saddle points. This is summarized in the claim below: Claim 3 Whenever the approximated structured prediction is non convex, i.e., , c? > 0 and cv < 0, the algorithm in lemmas 1 and 2 is not guaranteed to converge, but whenever it converges it reaches a stationary point of the primal and dual approximated structured prediction programs. Proof: In [6] 6 Figure 2: Denoising results: Gaussian (left) and Bimodal (right) noise. 6 Experimental evaluation We performed experiments on 2D grids since they are widely used to represent images, and have many cycles. We first investigate the role of  in the accuracy and running time of our algorithm, for fixed c? , cv = 1. We used a 10 ? 10 binary image and randomly generated 10 corrupted samples flipping every bit with 0.2 probability. We trained the model using CRF, structured-SVM and our approach for  = {1, 0.5, 0.01, 0}, ranging from approximated CRFs ( = 1) to approximated structured SVM ( = 0) and its smooth version ( = 0.01). The runtime for CRF and structuredSVM is order of magnitudes larger than our method since they require exact inference for every training example and every iteration of the algorithm. For the approximated structured prediction, the runtimes are 323, 324, 326, 294 seconds for  = {1, 0.5, 0.01, 0} respectively. As  gets smaller the runtime slightly increases, but it decreases for  = 0 since the `? norm is computed efficiently using the max function. However,  = 0 is less accurate than  = 0.01; When the approximated structured SVM converges, the gap between the primal and dual objectives was 1.3, and only 10?5 for  > 0. This is to be expected since the approximated structured SVM is non-smooth (Claim 2), and we did not used subgradient methods to ensure convergence to the optimal solution. We generated test images in a similar fashion while using the same  for training and testing. In this setting both CRF and structured-SVM performed well, with 2 misclassifications. For the approximated structured prediction, we obtained 2 misclassifications for  > 0. We also evaluated the quality of the solution using different values of  for training and inference [24]. When predicting with smaller  than the one used for learning the results are marginally worse than when predicting with the same . However, when predicting with larger , the results get significantly worse, e.g., learning with  = 0.01 and predicting with  = 1 results in 10 errors, and only 2 when  = 0.01. The main advantage of our algorithm is that it can efficiently learn many parameters. We now compared in a 5 ? 5 dataset a model learned with different parameters for every edge and vertex (? 300 parameters) and a model learned with parameters shared among the vertices and edges (2 parameters for edges and 2 for vertices) [9]. 
Using a large number of parameters increases performance: sharing parameters resulted in 16 misclassifications, while optimizing over the 300 parameters resulted in 2 errors. Our algorithm avoids overfitting in this case; we conjecture this is due to the regularization. We now compare our approach to state-of-the-art CRF solvers on the binary image dataset of [9], which consists of 4 different 64 × 64 base images. Each base image was corrupted 50 times with each type of noise. Following [23], we trained different models to denoise each individual image, using 40 examples for training and 10 for test. We compare our approach to approximating the conditional likelihood using loopy belief propagation (LBP) and mean field approximation (MF). For each of these approximations, we use stochastic gradient descent (SGD), stochastic meta-descent (SMD) and BFGS to learn the parameters. We do not report pseudolikelihood (PL) results since it did not work. The same behavior of PL was noticed by [23]. To reduce the computational complexity and improve the chances of convergence, [9, 23] forced their parameters to be shared across all nodes such that ∀i, θ_i = θ_n and ∀i, ∀j ∈ N(i), θ_{ij} = θ_e. In contrast, since our approach is efficient, we can exploit the full flexibility of the graph and learn more than 10,000 parameters. This is computationally prohibitive with the baselines. We use the pixel values as node potentials and an Ising model with only bias for the edge potentials, i.e., φ_{i,j} = [1, −1; −1, 1]. For all experiments we use ε = 1 and p = 2. For the baselines, we use the code, features and optimal parameters of [23]. Under the first noise model, each pixel was corrupted via i.i.d. Gaussian noise with mean 0 and standard deviation 0.3. Fig. 1 depicts test error in (%) for the different base images (i.e., I1, ..., I4). Note that our approach outperforms considerably the loopy belief propagation and mean field approximations for all optimization criteria (BFGS, SGD, SMD). For example, for the first base image the error of our approach is 0.0488%, which is equivalent to a 2-pixel error on average.
7 Related Work For the special case of CRFs, the idea of approximating the entropy function with local entropies appears in [24, 3]. In particular, [24] proved that using a concave entropy approximation gives robust prediction. [3] optimized the non-concave Bethe entropy c? = 1, cv = 1 ? |N (v)|, by repeatedly maximizing its concave approximation, thus converging in few concave iterations. Our work differs from these works in two aspects: we derive an efficient algorithm in Section 5 for the concave approximated program (c? , cv > 0) and our framework and algorithm include structured SVMs, as well as their smooth approximation when  ? 0. Some forms of approximated structured prediction were investigated for the special cases of CRFs. In [18] a similar program was used, but without the Lagrange multipliers ?x,y,v?? (? yv ) and no regularization, i.e., C = 0. As a result the local log-partition functions are unrelated, and efficient counting algorithm can be used for learning. In [3] a different approximated program was derived for c? = 1, cv = 0 which was solved by the BFGS convex solver. Our work is different as it considers efficient algorithms for approximated structured prediction which take advantage of the graphical model by sending messages along its edges. We show in the experiments that this significantly improves the run-time of the algorithm. Also, our approximated structured prediction includes as special cases approximated CRF, for  = 1, and approximated structured SVM, for  = 0. Moreover, we describe how to smoothly approximate the structured SVMs to avoid the shortcomings of subgradient methods, by simply setting  ? 0 . Some forms of approximated structured SVMs were dealt in [19] with the structured SMO algorithm. Independently, [14] presented an approximated structured SVMs program and a message passing algorithm, which reduce to Theorem 1 and Lemma 1 with  = 0 and c? = 1, cv = 1. However, in this algorithm the messages are not guaranteed to be bounded. They main difference of [14] from our work is that they lack the dual formulation, which we use to prove that the structured SVM smooth approximation, with  ? 0, is guaranteed to converge to optimum and that the dual variables, i.e. the beliefs, are guaranteed to converge to the optimal beliefs. The relation between the margin and the soft-max is similar to the one used in [17]. Independently, [4, 15] described the connection between structured SVMs loss and CRFs loss. [15] also presented the one-parameter extension of CRFs and structured SVMs described in (3). 8 Conclusion and Discussion In this paper we have related CRFs and structured SVMs and shown that the soft-max, a variant of the log-partition function, approximates smoothly the structured SVM hinge loss. We have also proposed an approximation for structured prediction problems based on local entropy approximations and derived an efficient message-passing algorithm that is guaranteed to converge, even for general graphs. We have demonstrated the effectiveness of our approach to learn graphs with large number of parameters.We plan to investigate other domains of application such as image segmentation. 8 References [1] M. Collins, A. Globerson, T. Koo, X. Carreras, and P.L. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. JMLR, 9:1775?1822, 2008. [2] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. In ICML, pages 304?311. ACM, 2008. [3] V. Ganapathi, D. Vickrey, J. 
Duchi, and D. Koller. Constrained approximate maximum entropy learning of Markov random fields. In UAI, 2008.
[4] K. Gimpel and N.A. Smith. Softmax-margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 733-736. Association for Computational Linguistics, 2010.
[5] T. Hazan and A. Shashua. Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference. Arxiv preprint arXiv:0903.3127, 2009.
[6] T. Hazan and R. Urtasun. Approximated Structured Prediction for Learning Large Scale Graphical Models. Arxiv preprint arXiv:1006.2899, 2010.
[7] T. Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. Journal of Artificial Intelligence Research, 26(1):153-190, 2006.
[8] J.K. Johnson, D.M. Malioutov, and A.S. Willsky. Lagrangian relaxation for MAP estimation in graphical models. In Proceedings of the Allerton Conference on Control, Communication and Computing. Citeseer, 2007.
[9] S. Kumar and M. Hebert. Discriminative Fields for Modeling Spatial Dependencies in Natural Images. In Neural Information Processing Systems. MIT Press, Cambridge, MA, 2003.
[10] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289, 2001.
[11] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. NIPS, 1:447-454, 2002.
[12] A. Levin and Y. Weiss. Learning to Combine Bottom-Up and Top-Down Segmentation. In European Conference on Computer Vision, 2006.
[13] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In UAI, 2009.
[14] O. Meshi, D. Sontag, T. Jaakkola, and A. Globerson. Learning Efficiently with Approximate Inference via Dual Losses. In Proc. ICML. Citeseer, 2010.
[15] P. Pletscher, C. Ong, and J. Buhmann. Entropy and Margin Maximization for Structured Output Learning. Machine Learning and Knowledge Discovery in Databases, pages 83-98, 2010.
[16] N. Ratliff, J.A. Bagnell, and M. Zinkevich. Subgradient methods for maximum margin structured learning. In ICML Workshop on Learning in Structured Output Spaces, 2006.
[17] F. Sha and L.K. Saul. Large margin hidden Markov models for automatic speech recognition. Advances in Neural Information Processing Systems, 19:1249, 2007.
[18] C. Sutton and A. McCallum. Piecewise training for structured prediction. Machine Learning, 77(2):165-194, 2009.
[19] B. Taskar. Learning structured prediction models: a large margin approach. PhD thesis, Stanford, CA, USA, 2005. Adviser: Daphne Koller.
[20] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, pages 895-902. Citeseer, 2002.
[21] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. NIPS, 16:51, 2004.
[22] B. Taskar, S. Lacoste-Julien, and M.I. Jordan. Structured prediction, dual extragradient and Bregman projections. JMLR, 7:1653-1684, 2006.
[23] S. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Murphy. Accelerated Training of Conditional Random Fields with Stochastic Meta-Descent. In International Conference in Machine Learning, 2006.
[24] M.J. Wainwright. Estimating the Wrong Graphical Model: Benefits in the Computation-Limited Setting. JMLR, 7:1859, 2006.
[25] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference.
Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[26] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. Transactions on Information Theory, 51(7):2282-2312, 2005.
3,217
3,914
Spatial and anatomical regularization of SVM for brain image analysis

Rémi Cuingnet, CRICM (UPMC/Inserm/CNRS), Paris, France; Inserm - LIF (UMR S 678), Paris, France. remi.cuingnet@imed.jussieu.fr
Habib Benali, Inserm - LIF, Paris, France. habib.benali@imed.jussieu.fr
Marie Chupin, CRICM, Paris, France. marie.chupin@upmc.fr
Olivier Colliot, CRICM, Paris, France. olivier.colliot@upmc.fr

Abstract

Support vector machines (SVM) are increasingly used in brain image analyses since they allow capturing complex multivariate relationships in the data. Moreover, when the kernel is linear, SVMs can be used to localize spatial patterns of discrimination between two groups of subjects. However, the features' spatial distribution is not taken into account. As a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. This paper introduces a framework to spatially regularize SVM for brain image analysis. We show that Laplacian regularization provides a flexible framework to integrate various types of constraints and can be applied to both cortical surfaces and 3D brain images. The proposed framework is applied to the classification of MR images based on gray matter concentration maps and cortical thickness measures from 30 patients with Alzheimer's disease and 30 elderly controls. The results demonstrate that the proposed method enables natural spatial and anatomical regularization of the classifier.

1 Introduction

Brain image analyses have widely relied on univariate voxel-wise analyses, such as voxel-based morphometry (VBM) for structural MRI [1]. In such analyses, brain images are first spatially registered to a common stereotaxic space, and then mass univariate statistical tests are performed in each voxel to detect significant group differences. However, the sensitivity of these approaches is limited when the differences are spatially complex and involve a combination of different voxels or brain structures [2]. Recently, there has been a growing interest in support vector machine (SVM) methods [3, 4] to overcome the limits of these univariate analyses. These approaches allow capturing complex multivariate relationships in the data and have been successfully applied to the individual classification of a variety of neurological conditions [5, 6, 7, 8]. Moreover, the output of the SVM can also be analyzed to localize spatial patterns of discrimination, for example by drawing the coefficients of the optimal margin hyperplane (OMH), which, in the case of a linear SVM, live in the same space as the MRI data [7, 8]. However, one of the problems with analyzing the OMH coefficients directly is that the corresponding maps are scattered and lack spatial coherence. This makes it difficult to give a meaningful interpretation of the maps, for example to localize the brain regions altered by a given pathology.

In this paper, we address this issue by proposing a framework to introduce spatial consistency into SVMs by using regularization operators. Section 2 provides some background information on SVMs and regularization operators. We then show that the regularization operator framework provides a flexible approach to model different types of proximity (section 3). Section 4 presents the first type of regularization, which models spatial proximity, i.e. two features are close if they are spatially close. We then present in section 5 a more complex type of constraint, called anatomical proximity.
In the latter case, two features are considered close if they belong to the same brain network; for instance two voxels are close if they belong to the same anatomical or functional region or if they are anatomically or functionally connected (based on fMRI networks or white matter tracts). Finally, in section 6, the proposed framework is illustrated on the analysis of MR images using gray matter concentration maps and cortical thickness measures from 30 patients with AD and 30 elderly controls from the ADNI database (www.adni-info.org).

2 Priors in SVM

In this section, we first describe the neuroimaging data that we consider in this paper. Then, after some background on SVMs and on how to add prior knowledge in SVMs, we describe the framework of regularization operators.

2.1 Brain imaging data

In this contribution, we consider any feature computed either at each voxel of a 3D brain image or at any vertex of the cortical surface. Typically, for anatomical studies, the features could be tissue concentration maps such as gray matter (GM) or white matter (WM) for the 3D case, or cortical thickness maps for the surface case. The proposed methods are also applicable to functional or diffusion-weighted MRI. We further assume that 3D images or cortical surfaces were spatially normalized to a common stereotaxic space (e.g. [9]) as in many group studies or classification methods [5, 6, 7, 8, 10].

Let $\mathcal{V}$ be the domain of the 3D images or surfaces. $v$ will denote an element of $\mathcal{V}$ (i.e. a voxel or a vertex). Thus, $\mathcal{X} = \mathbb{R}^{\mathcal{V}}$, together with the canonical dot product, will be the input space. Let $x_s \in \mathcal{X}$ be the data of a given subject $s$. In the case of 3D images, $x_s$ can be considered in two different ways: (i) as an element of $\mathbb{R}^d$ where $d$ denotes the number of voxels, (ii) as a real-valued function defined on a compact subset of $\mathbb{R}^3$. Both finite and continuous viewpoints will be studied in this paper because they allow different types of regularization. Similarly, in the surface case, $x_s$ can be viewed either as an element of $\mathbb{R}^d$ where $d$ denotes the number of vertices, or as a real-valued function on a 2-dimensional compact Riemannian manifold. We consider a group of $N$ subjects with their corresponding data $(x_s)_{s \in [1,N]} \in \mathcal{X}^N$. Each subject is associated with a group label $(y_s)_{s \in [1,N]} \in \{-1, 1\}^N$ (typically his diagnosis, i.e. diseased or healthy).

2.2 Linear SVM

The linear SVM solves the following optimization problem [3, 4, 11]:

$$(w^{opt}, b^{opt}) = \operatorname*{arg\,min}_{w \in \mathcal{X},\, b \in \mathbb{R}} \sum_{s=1}^{N} \ell_{hinge}\left(y_s\left[\langle w, x_s\rangle + b\right]\right) + \lambda \, \|w\|^2 \qquad (1)$$

where $\lambda \in \mathbb{R}^+$ is the regularization parameter and $\ell_{hinge}$ the hinge loss function defined as $\ell_{hinge}: u \in \mathbb{R} \mapsto \max(0, 1-u)$. With a linear SVM, the feature space is the same as the input space. Thus, when the input features are the voxels of a 3D image, each element of $w^{opt} = (w_v^{opt})_{v \in \mathcal{V}}$ also corresponds to a voxel. Similarly, for the surface-based methods, the elements of $w^{opt}$ can be represented on the vertices of the cortical surface. To be anatomically consistent, if $v^{(1)} \in \mathcal{V}$ and $v^{(2)} \in \mathcal{V}$ are close according to the topology of $\mathcal{V}$, their weights in the SVM classifier, $w^{opt}_{v^{(1)}}$ and $w^{opt}_{v^{(2)}}$ respectively, should be similar. In other words, if $v^{(1)}$ and $v^{(2)}$ correspond to two neighboring regions, they should have a similar role in the classifier function. However, this is not guaranteed with the standard linear SVM (as for example in [7]) because the regularization term is not a spatial regularization. The aim of the present paper is to propose methods to ensure that $w^{opt}$ is spatially regularized.
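For concreteness, the sketch below fits the standard, spatially unregularized linear SVM of Eq. (1) with scikit-learn. The synthetic data are an illustrative assumption (in the paper, each row of X would be a subject's GM map or thickness map), and the comment relating the penalty $\lambda$ to scikit-learn's C uses the convention $\lambda = 1/(2NC)$ noted in section 6.2, which corresponds to averaging the hinge loss over subjects.

```python
# Minimal sketch: baseline linear SVM of Eq. (1), no spatial regularization.
# Toy data only; one weight per "voxel" gives the (scattered) OMH map.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N, d = 60, 1000                                  # subjects, voxels (toy sizes)
X = rng.normal(size=(N, d))                      # one row per subject x_s
y = np.where(X[:, :10].sum(axis=1) > 0, 1, -1)   # toy diagnosis labels

# LinearSVC minimizes 0.5*||w||^2 + C*sum(hinge); with the loss averaged over
# the N subjects this matches Eq. (1) with lambda = 1/(2*N*C) (cf. Sec. 6.2).
clf = LinearSVC(C=1.0, loss="hinge", max_iter=20000).fit(X, y)

w_opt = clf.coef_.ravel()                        # the OMH coefficient map
print("largest |w| components:", np.argsort(-np.abs(w_opt))[:5])
```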
2.3 How to include priors in SVM

To spatially regularize the SVM, one has to include some prior knowledge on the proximity of features. In the literature, three main ways have been considered in order to include priors in SVMs. In an SVM, all the information used for classification is encoded in the kernel. Hence, the first way to include priors is to directly design the kernel function [4]. But this implies knowing a metric on the input space $\mathcal{X}$ consistent with the prior knowledge. Another way is to force the classifier function to be locally invariant to some transformations. This can be done: (i) by directly engineering a kernel which leads to a locally invariant SVM, (ii) by generating artificially transformed examples from the training set to create virtual support vectors (virtual SVs), (iii) by using a combination of both these approaches, called kernel jittering [12, 13, 14]. But the main difficulty here is how to define the transformations to which we would like the kernel to be invariant. The last way is to consider the SVM from the regularization viewpoint [15, 4]. The idea is to force the classifier function to be smooth with respect to some criterion. This is the viewpoint adopted in this paper.

2.4 Regularization operators

Our aim is to introduce a spatial regularization on the classifier function of the SVM, which can be written as $\operatorname{sgn}(f(x_s) + b)$ where $f \in \mathbb{R}^{\mathcal{X}}$. This is done through the definition of a regularization operator $P$ on $f$. Following [15, 4], $P$ is defined as a linear map from a space $\mathcal{F} \subset \mathbb{R}^{\mathcal{X}}$ into a dot product space $(\mathcal{D}, \langle\cdot,\cdot\rangle_{\mathcal{D}})$. $G: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a Green's function of a regularization operator $P$ iff:

$$\forall f \in \mathcal{F},\ \forall x \in \mathcal{X},\quad f(x) = \langle P(G(x,\cdot)),\, P(f)\rangle_{\mathcal{D}} \qquad (2)$$

If $P$ admits at least one Green's function $G$, then $G$ is a positive semi-definite kernel and the minimization problem

$$(f^{opt}, b^{opt}) = \operatorname*{arg\,min}_{f \in \mathcal{F},\, b \in \mathbb{R}} \sum_{s=1}^{N} \ell_{hinge}\left(y_s[f(x_s) + b]\right) + \lambda \, \|P(f)\|_{\mathcal{D}}^2 \qquad (3)$$

is equivalent to the SVM minimization problem with kernel $G$. Since in a linear SVM the feature space is the input space, $f$ lies in the input space. Therefore, the optimization problem (3) is very convenient to include spatial regularization on $f$ via the definition of $P$. Note that, usually, $\mathcal{F}$ is a Reproducing Kernel Hilbert Space (RKHS) with kernel $K$ and $\mathcal{D} = \mathcal{F}$. Hence, if $P$ is bounded, injective and compact, $P$ admits a Green's function $G = (P^*P)^{-1}K$, where $P^*$ denotes the adjoint of $P$. One has to define the regularization operator $P$ so as to obtain the suitable regularization for the problem.

3 Laplacian regularization

Spatial regularization requires the notion of proximity between elements of $\mathcal{V}$. This can be done through the definition of a graph in the discrete case or a metric in the continuous case. In this section, we propose spatial regularizations based on the Laplacian for both of these proximity models. This penalizes the high-frequency components with respect to the topology of $\mathcal{V}$.

3.1 Graphs

When $\mathcal{V}$ is finite, weighted graphs are a natural framework to take spatial information into consideration. Voxels of a brain image can be considered as nodes of a graph which models the voxels' proximity. This graph can be the voxel connectivity (6, 18 or 26) or a more sophisticated graph. We chose the following regularization operator:

$$P: w^* \in \mathcal{F} = \mathcal{L}(\mathbb{R}^{\mathcal{V}}, \mathbb{R}) \mapsto \left(e^{\frac{1}{2}\beta L} w\right)^* \in \mathcal{F} \qquad (4)$$

where $L$ denotes the graph Laplacian [16] and $w^*$ the dual vector of $w$. $\beta$ controls the size of the regularization. The optimization problem then becomes:

$$(w^{opt}, b^{opt}) = \operatorname*{arg\,min}_{w \in \mathcal{X},\, b \in \mathbb{R}} \sum_{s=1}^{N} \ell_{hinge}\left(y_s[\langle w, x_s\rangle + b]\right) + \lambda \, \left\| e^{\frac{1}{2}\beta L} w \right\|^2 \qquad (5)$$

Such a regularization exponentially penalizes the high-frequency components and thus forces the classifier to consider as similar those voxels that are highly connected according to the graph adjacency matrix. According to the previous section, this new minimization problem (5) is equivalent to an SVM optimization problem. The new kernel $K_\beta$ is given by:

$$K_\beta(x_1, x_2) = x_1^{\top} e^{-\beta L} x_2 \qquad (6)$$

This is a heat or diffusion kernel on a graph. Our approach differs from the diffusion kernels introduced by Kondor et al. [17] because the nodes of the graph are the features, here the voxels, whereas in [17], the nodes were the objects to classify. Laplacian regularization was also used in satellite imaging [18] but, again, the nodes were the objects to classify. Our approach can also be considered as a spectral regularization on the graph [19]. To our knowledge, such spectral regularization has not been applied to brain images but only to the classification of microarray data [20].

3.2 Compact Riemannian manifolds

In this paper, when $\mathcal{V}$ is continuous, it can be considered as a 2-dimensional (e.g. surfaces) or a 3-dimensional (e.g. 3D Euclidean or more complex) compact Riemannian manifold. The metric then models the notion of proximity. On such spaces, the heat kernel exists [21, 22]. Therefore, the Laplacian regularization presented in the previous paragraph can be extended to compact Riemannian manifolds [22]. Similarly to the graph case, we chose the following regularization operator:

$$P: w^* \in \mathcal{F} = \mathcal{L}(\mathbb{R}^{\mathcal{V}}, \mathbb{R}) \mapsto \left(e^{\frac{1}{2}\beta \Delta} w\right)^* \in \mathcal{F} \qquad (7)$$

where $\Delta$ denotes the Laplace-Beltrami operator. The optimization problem is also equivalent to an SVM optimization problem with kernel $K_\beta(x_1, x_2) = x_1^{\top} e^{-\beta \Delta} x_2$. Note the difference between our approach and that of Lafferty and Lebanon [22]. In our case, the points of the manifolds are the features, whereas in [22], they were the objects to classify. In sections 4 and 5, we present different types of proximity models which correspond to different types of graphs or distances.

4 Spatial proximity

In this section, we consider the case of regularization based on spatial proximity, i.e. two voxels (or vertices) are close if they are spatially close.

4.1 The 3D case

When $\mathcal{V}$ are the image voxels (discrete case), the simplest option to encode the spatial proximity is to use the image connectivity (e.g. 6-connectivity) as a regularization graph. Similarly, when $\mathcal{V}$ is a compact subset of $\mathbb{R}^3$ (continuous case), the proximity is encoded by a Euclidean distance. In both cases, this is equivalent to pre-processing the data with a Gaussian smoothing kernel with standard deviation $\sigma = \sqrt{\beta}$ [17]. However, smoothing the data with a Gaussian kernel would mix gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Instead, we propose a graph which takes into consideration both the spatial localization and the tissue types. Based on tissue probability maps, in each voxel $v$, we have the set of probabilities $p_v$ that this voxel belongs to GM, WM or CSF. We considered the following graph. Two voxels are connected if and only if they are neighbors in the image (6-connectivity). The weight $a_{u,v}$ of the edge between two connected voxels $u$ and $v$ is $a_{u,v} = e^{-d_{\chi^2}(p_u, p_v)^2/(2\sigma^2)}$, where $d_{\chi^2}$ is the $\chi^2$-distance between two distributions. We chose beforehand $\sigma$ equal to the standard deviation of $d_{\chi^2}(p_u, p_v)$. To compute the kernel, we computed $e^{-\beta L} x_s$ for each subject $s$ in the training set by scaling the Laplacian and using the Taylor series expansion.
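The sketch below illustrates this discrete construction on a toy 1-D "image": a connectivity graph whose edges are weighted by the $\chi^2$-distance between tissue probability vectors (section 4.1), followed by the computation of $e^{-\beta L}x_s$ as in Eqs. (4)-(6). The 1-D chain (instead of 3-D 6-connectivity) and the use of scipy's expm_multiply in place of the scaled Taylor expansion are illustrative substitutions, not the original implementation.

```python
# Sketch of the tissue-weighted diffusion kernel of Eqs. (4)-(6) and Sec. 4.1.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(0)
n = 200                                    # number of voxels (toy)
P = rng.dirichlet(np.ones(3), size=n)      # p_v = (GM, WM, CSF) probabilities

def chi2_dist(p, q):
    # chi-squared distance between two discrete distributions
    return np.sqrt(0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12)))

# edges between neighboring voxels, weighted by tissue similarity
d = np.array([chi2_dist(P[i], P[i + 1]) for i in range(n - 1)])
sigma = d.std()
w = np.exp(-d ** 2 / (2 * sigma ** 2))     # edge weights a_{u,v}

W = sp.diags([w, w], offsets=[-1, 1], shape=(n, n), format="csr")
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian

beta = 4.0
x = rng.normal(size=n)                     # one subject's feature vector x_s
x_diff = expm_multiply(-beta * L, x)       # e^{-beta L} x_s

# kernel between two subjects: K_beta(x1, x2) = x1^T e^{-beta L} x2
x2 = rng.normal(size=n)
print(x2 @ x_diff)
```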
4.2 The surface case

The connectivity graph is not directly applicable to surfaces. Indeed, the regularization would then strongly depend on the mesh used to discretize the surface. This shortcoming can be overcome by reweighting the graph with conformal weights. In this paper, we chose a different approach by adopting the continuous viewpoint: we consider the cortical surface as a 2-dimensional Riemannian manifold and use the regularization operator defined by equation (7). Indeed, the Laplacian is an intrinsic operator and does not depend on the chosen surface parameterization. The heat kernel has already been used for cortical smoothing, for example in [23, 24, 25, 26]. We will therefore not detail this part. We used the implementation described in [26].

5 Anatomical proximity

In this section, we consider a different type of proximity, which we call anatomical proximity. Two voxels are considered close if they belong to the same brain network. For example, two voxels can be close if they belong to the same anatomical or functional region (defined for example by a probabilistic atlas). This can be seen as a "short-range" connectivity. Another example is that of "long-range" proximity, which models the fact that distant voxels can be anatomically (through white matter tracts) or functionally connected (based on fMRI networks). We first focus on the discrete case. The presented framework can be used either for 3D images or surfaces and computed very efficiently. However, such an efficient implementation is obtained at the cost of the spatial proximity. Therefore, we then show a continuous formulation which enables us to consider both spatial and anatomical proximity.

5.1 On graphs: atlas and connectivity

Let $(A_1, \dots, A_R)$ be the $R$ regions of interest (ROI) of an atlas and $p(v \in A_r)$ the probability that voxel $v$ belongs to region $A_r$. Then the probability that two voxels $v^{(i)}$ and $v^{(j)}$ belong to the same region is $\sum_{r=1}^{R} p\left(v^{(i)}, v^{(j)} \in A_r\right)$. We assume that if $v^{(i)} \neq v^{(j)}$ then $p\left(v^{(i)}, v^{(j)} \in A_r\right) = p\left(v^{(i)} \in A_r\right) p\left(v^{(j)} \in A_r\right)$. Let $E \in \mathbb{R}^{d \times R}$ be the right stochastic matrix defined by $E_{i,r} = p\left(v^{(i)} \in A_r\right)$. Then, for $i \neq j$, the $(i,j)$-th entry of the adjacency matrix $EE^{\top}$ is the probability that voxels $v^{(i)}$ and $v^{(j)}$ belong to the same regions. For "long-range" connections (structural or functional), one can consider an $R$-by-$R$ matrix $C$ with the $(r_1, r_2)$-th entry being the probability that $A_{r_1}$ and $A_{r_2}$ are connected. The adjacency matrix then becomes $ECE^{\top}$. We considered the normalized Laplacian $\tilde{L}$ [16], to be sure that the two terms commute:

$$\tilde{L} = I_d - D^{-\frac{1}{2}} E C E^{\top} D^{-\frac{1}{2}} \qquad (8)$$

where $D$ is a diagonal matrix. Hence, if $CE^{\top}D^{-1}E$ is not singular, we have:

$$e^{-\beta \tilde{L}} = e^{-\beta}\left[ I_d + D^{-\frac{1}{2}} E \left(e^{\beta C E^{\top} D^{-1} E} - I_R\right)\left(C E^{\top} D^{-1} E\right)^{-1} C E^{\top} D^{-\frac{1}{2}} \right] \qquad (9)$$

The computation requires only the computation of $D^{-\frac{1}{2}}$, which is done efficiently since $D$ is a diagonal matrix, and the computation of the inverse and matrix exponential of an $R$-by-$R$ matrix, which is also efficient since $R \approx 10^2$. This method can be directly applied to both 3D images and cortical surfaces. Unfortunately, the efficient implementation is obtained at the cost of the spatial proximity. The next section presents a combination of anatomical and spatial proximity using the continuous viewpoint.
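Eq. (9) can be checked numerically. The sketch below compares the closed form against a brute-force d x d matrix exponential on toy data; the random soft atlas memberships E, the identity connectivity C (the pure "short-range" case), and all sizes are illustrative assumptions.

```python
# Numerical check of Eq. (9): the d x d exponential e^{-beta*L~} reduces to
# the exponential and inverse of a single R x R matrix.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, R, beta = 50, 4, 2.0
E = rng.dirichlet(np.ones(R), size=d)     # right stochastic: E[i,r] = p(v_i in A_r)
C = np.eye(R)                             # toy "short-range" connectivity

M = E @ C @ E.T                           # adjacency ECE^t
D = np.diag(M.sum(axis=1))
Dm = np.diag(1.0 / np.sqrt(np.diag(D)))   # D^{-1/2}
L = np.eye(d) - Dm @ M @ Dm               # normalized Laplacian, Eq. (8)

# closed form of Eq. (9)
S = C @ E.T @ np.linalg.inv(D) @ E        # the R x R matrix C E^t D^{-1} E
K_fast = np.exp(-beta) * (np.eye(d)
         + Dm @ E @ (expm(beta * S) - np.eye(R)) @ np.linalg.inv(S)
         @ C @ E.T @ Dm)

K_ref = expm(-beta * L)                   # brute force for comparison
print(np.max(np.abs(K_fast - K_ref)))     # near machine precision
```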
5.2 On statistical manifolds

In this section, the goal is to take into account various types of prior information, such as tissue information, atlas information and spatial proximity. We first show that this can be done by considering the images or surfaces as statistical manifolds together with the Fisher metric. We then give some details about the computation of the kernel.

Fisher metric. We assume that we are given an anatomical or a functional atlas $\mathcal{A}$ composed of $R$ regions: $\{A_r\}_{r=1\cdots R}$. Similarly, $\mathcal{T} = \{T_{GM}, T_{WM}, T_{CSF}\}$ denotes the set of brain tissues. In each point $v \in \mathcal{V}$, we have a probability distribution $p_{atlas}(\cdot|v) \in \mathbb{R}^{\mathcal{T}\times\mathcal{A}}$ which informs about the tissue type and the atlas region in $v$. Without any loss of generality, one can assume that the tissue information is encoded in the atlas. Therefore, we consider the probability $p_{atlas}(\cdot|v) \in \mathbb{R}^{\mathcal{A}}$. We also consider a probability distribution $p_{loc}(\cdot|v) \in \mathbb{R}^{\mathcal{V}}$ which encodes the spatial proximity. A simple example is $p_{loc}(\cdot|v) \sim \mathcal{N}(v, \sigma_{loc}^2)$. Therefore, we consider the probability family $\mathcal{M} = \left\{ p(\cdot|v) \in \mathbb{R}^{\mathcal{A}\times\mathcal{V}} \;:\; p(\cdot|v) = p_{atlas}(\cdot|v)\, p_{loc}(\cdot|v) \right\}_{v \in \mathcal{V}}$.

A natural way to encode proximity on $\mathcal{M}$ is to use the Fisher metric as in [22]. With some smoothness assumptions about $p$, $\mathcal{M}$ together with this metric is a compact Riemannian manifold [27]. For clarity, we present this framework only for 3D images, but it could be applied to cortical surfaces with minor changes. The metric tensor $g$ is then given for all $v \in \mathcal{V}$ by:

$$g_{ij}(v) = \mathbb{E}_v\!\left[\frac{\partial \log p(\cdot|v)}{\partial v_i}\,\frac{\partial \log p(\cdot|v)}{\partial v_j}\right], \quad 1 \le i, j \le 3 \qquad (10)$$

If we further assume that $p_{loc}(\cdot|v)$ is isotropic we have:

$$g_{ij}(v) = g^{atlas}_{ij}(v) + \delta_{ij} \int_{u \in \mathcal{V}} \left(\frac{\partial \log p_{loc}(u|v)}{\partial v_i}\right)^2 p_{loc}(u|v)\, du \qquad (11)$$

where $\delta_{ij}$ is the Kronecker delta and $g^{atlas}$ is the metric tensor when $p(\cdot|v) = p_{atlas}(\cdot|v)$. When $p_{loc}(\cdot|v) \sim \mathcal{N}(v, \sigma_{loc}^2 I_3)$, we have $g_{ij}(v) = g^{atlas}_{ij}(v) + \delta_{ij}/\sigma_{loc}^2$.

Computing the kernel. Once the notion of proximity is defined, one has to compute the kernel matrix. The computation of the kernel matrix requires the computation of $e^{-\beta\Delta}x_s$ for all the subjects of the training set. The eigendecomposition of the Laplace-Beltrami operator is intractable since the number of voxels in a brain image is about $10^6$. Hence $e^{-\beta\Delta}x_s$ is considered as the solution at $t = \beta$ of the heat equation with the homogeneous Dirichlet boundary conditions:

$$\begin{cases} \dfrac{\partial u}{\partial t} - \Delta u = 0 \\ u(t = 0) = x_s \end{cases} \qquad (12)$$

The Laplace-Beltrami operator is given by [21]:

$$\Delta u = \frac{1}{\sqrt{\det g}} \sum_{i=1}^{3} \frac{\partial}{\partial v_i}\left(\sqrt{\det g}\, \sum_{j=1}^{3} h_{ij} \frac{\partial u}{\partial v_j}\right)$$

where $h$ is the inverse tensor of $g$. To solve equation (12), one can use a variational approach [28]. We used rectangular finite elements in space and the explicit finite difference scheme for the time discretization. $\delta_x$ and $\delta_t$ denote the space step and the time step respectively. $\delta_x$ is fixed by the MRI spatial resolution. $\delta_t$ is then chosen so as to respect the Courant-Friedrichs-Lewy (CFL) condition, which can be written in this case as $\delta_t \le 2 (\max_i \lambda_i)^{-1}$, where the $\lambda_i$ are the eigenvalues of the generalized eigenproblem $KU = \lambda M U$, with $K$ the stiffness matrix and $M$ the mass matrix. To compute the optimal time step $\delta_t$, we estimated the largest eigenvalue with the power iteration method.
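The following sketch mirrors this numerical scheme on a toy problem: explicit time stepping of the heat equation, with the time step bounded via the largest eigenvalue estimated by power iteration. The 1-D grid, the uniform (Euclidean) metric, and the identity mass matrix are simplifying assumptions; the paper uses 3-D finite elements with the Fisher metric.

```python
# Toy sketch of the explicit heat-equation solver of Sec. 5.2.
import numpy as np

n, dx, beta = 128, 1.0, 4.0
x = np.random.default_rng(0).normal(size=n)   # a subject's feature map x_s

# 1-D stiffness-like operator: (K u)_i = (-u_{i-1} + 2 u_i - u_{i+1}) / dx^2
# (zero Dirichlet boundaries implied by the stencil)
def K(u):
    out = 2 * u.copy()
    out[1:] -= u[:-1]
    out[:-1] -= u[1:]
    return out / dx ** 2

# power iteration for the largest eigenvalue of K (mass matrix = identity here)
v = np.random.default_rng(1).normal(size=n)
for _ in range(200):
    v = K(v)
    v /= np.linalg.norm(v)
lam_max = v @ K(v)

dt = 0.9 * 2.0 / lam_max                      # CFL-type bound: dt <= 2 / lam_max
u, t = x.copy(), 0.0
while t < beta:                               # integrate to t = beta
    step = min(dt, beta - t)
    u -= step * K(u)                          # explicit Euler: u <- u - dt * K u
    t += step
print(u[:5])                                  # diffused features e^{-beta K} x_s
```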
6 Experiments and results

6.1 Material

Subjects and MRI acquisition. Data were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu/ADNI). The Principal Investigator of this initiative is Michael W. Weiner, M.D., VA Medical Center and University of California - San Francisco. For up-to-date information see www.adni-info.org. We studied 30 patients with probable AD (age ± standard deviation (SD) = 74 ± 4, range = 60-80 years, mini-mental score (MMS) = 23 ± 2) and 30 elderly controls (age ± SD = 73 ± 4, range = 60-80, MMS = 29 ± 1) which were selected from the ADNI database according to the following criteria. Subjects were excluded if their scan revealed major artifacts or gross structural abnormalities of the white matter, for it makes the tissue segmentation step fail. Subjects aged 80 years or older were also excluded. The MR scans are T1-weighted MR images. MRI acquisition was done according to the ADNI acquisition protocol described in [29].

Feature extraction. For the 3D image analyses, all T1-weighted MR images were segmented into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) using the SPM5 (Statistical Parametric Mapping, London, UK) unified segmentation routine [30] and spatially normalized with DARTEL [9]. The features are the GM probability maps in the MNI space. For the surface-based analyses, the features are the cortical thickness values at each vertex of the cortical surface. Cortical thickness measures were performed with FreeSurfer (Massachusetts General Hospital, Boston, MA).

6.2 Proposed experiments

As an illustration of the method, we present the results of the AD versus controls analysis. We present the maps associated with the optimal margin hyperplane (OMH). The classification function obtained with a linear SVM is the sign of the inner product of the features with $w^{opt}$, a vector orthogonal to the OMH [3, 4]. Therefore, if the absolute value of the $i$-th component of $w^{opt}$, $|w_i^{opt}|$, is small compared to the other components $(|w_j^{opt}|)_{j \neq i}$, the $i$-th feature will have a small influence on the classification. Conversely, if $|w_i^{opt}|$ is relatively large, the $i$-th feature will play an important role in the classifier. Thus the optimal weights $w^{opt}$ allow us to evaluate the anatomical consistency of the classifier. In all experiments, the C parameter of the SVM was fixed to one ($\lambda = \frac{1}{2NC}$ [4]).

6.3 Results: spatial proximity

In this section, we present the results for the spatial proximity in the 3D case (method presented in section 4.1). Due to space limitations, the surface case is not presented. Fig. 1(a) presents the OMH when no spatial regularization is performed. Fig. 1(b) shows the results with spatial proximity but without tissue probability maps. $w$ becomes smoother and spatially consistent. However, it mixes tissues and does not respect the topology of the cortex. For instance, it mixes tissues of the temporal lobe with tissues of the frontal and parietal lobes. The results with both spatial proximity and tissue maps are shown in Fig. 1(c). The OMH is much more consistent with the brain anatomy. $\beta$ controls the size of the spatial regularization and was chosen to be equivalent to a 4mm-FWHM Gaussian smoothing. The classification accuracy was estimated by leave-one-out cross-validation. The classifiers were able to distinguish AD from CN with similar accuracies (83% with no spatial priors and 85% with spatial priors).

6.4 Results: anatomical proximity

In this section, we present the results for the anatomical proximity. We first present the discrete surface case. The discrete 3D case leads to comparable results but is omitted here due to space limitations. We then present the continuous 3D case. Extension to surfaces is left for future work.

Discrete case. For the discrete case, we used "short-range" proximity, defined by the cortical atlas of Desikan et al. [31] with binary probabilities. We tested different values for $\beta = 0, 1, \dots, 5$. The accuracies ranged between 80% and 85%.
The highest accuracy was reached for $\beta = 3$. The optimal SVM weights $w$ are shown in Fig. 2. When no regularization has been carried out, they are noisy and scattered (Fig. 2(a)). When the amount of regularization is increased, voxels of a same region tend to be considered as similar by the classifier (Fig. 2(b-d)). Note how the anatomical coherence of the OMH varies with $\beta$.

Continuous case. We then present the results of the 3D continuous case (section 5.2). The atlas information used was only the tissue types. We chose $\sigma_{loc} = 10$mm for the spatial confidence. $\beta$ was chosen to be equivalent to a 4mm-FWHM Gaussian smoothing. The classifier reached 87% accuracy. The optimal SVM weights $w$ are shown in Fig. 1(d). The tissue knowledge enables the classifier to be more consistent with the anatomy. For instance, note the difference with the Gaussian smoothing (Fig. 1(b)) and how the proposed method avoids mixing the temporal lobe with the parietal and frontal lobes.

[Figure 1: Normalized $w$ coefficients: (a) no spatial prior, (b) spatial proximity: FWHM = 4mm, (c) spatial proximity and tissues: FWHM ≈ 4mm, (d) Fisher metric using tissue maps.]

[Figure 2: Normalized $w$ of the left hemisphere when the SVM is regularized with a cortical atlas [31]: (a) $\beta = 0$ (no prior), (b) $\beta = 1$, (c) $\beta = 2$, (d) $\beta = 3$.]

7 Discussion

In this contribution, we proposed to use regularization operators to add spatial consistency to SVMs for brain image analysis. We showed that this provides a flexible approach to model different types of proximity between the features. We proposed derivations for both 3D image features, such as tissue maps, and surface characteristics, such as cortical thickness. We considered two different types of formulation: a discrete viewpoint in which the proximity is encoded via a graph, and a continuous viewpoint in which the data lie on a Riemannian manifold. In particular, the latter viewpoint is useful for surface cases because it overcomes problems due to surface parameterization. This paper introduced two different types of proximity. We first considered the case of regularization based on spatial proximity, which results in spatially consistent OMH, making their anatomical interpretation more meaningful. We then considered a different type of proximity which allows modeling higher-level knowledge, which we call anatomical proximity. In this model, two voxels are considered close if they belong to the same brain network. For example, two voxels can be close if they belong to the same anatomical region. This can be seen as a "short-range" connectivity. Another example is that of "long-range" proximity, which models the fact that distant voxels can be anatomically connected, through white matter tracts, or functionally connected, based on fMRI networks. Preliminary evaluation was performed on 30 patients with AD and 30 age-matched controls. The results demonstrate that the proposed approaches allow obtaining spatially and anatomically coherent discrimination patterns. In particular, the obtained hyperplanes are largely consistent with the neuropathology of AD, with highly discriminant features in the medial temporal lobe, as well as lateral temporal, parietal associative and frontal areas. As for the classification results, they were comparable to those reported in the literature for AD classification (e.g. [5, 8, 7]). The use of regularization did not substantially improve the accuracy.
However, the most important point is that the proposed approach makes the results more consistent with the anatomy, making their interpretation more meaningful. Finally, it should be noted that the proposed approach is not specific to structural MRI, and can be applied to other pathologies and other types of data (e.g. functional or diffusion-weighted MRI).

Acknowledgments

This work was supported by ANR (project HM-TC, number ANR-09-EMER-006). Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI; Principal Investigator: Michael Weiner; NIH grant U01 AG024904). ADNI data are disseminated by the Laboratory of Neuro Imaging at the University of California, Los Angeles.

References

[1] J. Ashburner and K.J. Friston. Voxel-based morphometry - the methods. NeuroImage, 11(6):805-21, 2000.
[2] C. Davatzikos. Why voxel-based morphometric analysis should be used with great caution when characterizing group differences. NeuroImage, 23(1):17-20, 2004.
[3] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[4] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, 2001.
[5] Z. Lao et al. Morphological classification of brains via high-dimensional shape transformations and machine learning methods. NeuroImage, 21(1):46-57, 2004.
[6] Y. Fan et al. COMPARE: classification of morphological patterns using adaptive regional elements. IEEE TMI, 26(1):93-105, 2007.
[7] S. Klöppel et al. Automatic classification of MR scans in Alzheimer's disease. Brain, 131(3):681-9, 2008.
[8] P. Vemuri et al. Alzheimer's disease diagnosis in individual subjects using structural MR images: validation studies. NeuroImage, 39(3):1186-97, 2008.
[9] J. Ashburner et al. A fast diffeomorphic image registration algorithm. NeuroImage, 38(1):95-113, 2007.
[10] O. Querbes et al. Early diagnosis of Alzheimer's disease using cortical thickness: impact of cognitive reserve. Brain, 132(8):2036, 2009.
[11] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[12] D. Decoste and B. Schölkopf. Training invariant support vector machines. Machine Learning, 46(1):161-90, 2002.
[13] B. Schölkopf et al. Incorporating invariances in support vector learning machines. In Proc. ICANN 1996, page 47. Springer-Verlag, 1996.
[14] B. Schölkopf et al. Prior knowledge in support vector kernels. In Proc. Advances in Neural Information Processing Systems '97, pages 640-46. MIT Press, 1998.
[15] A.J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation, and operator inversion. Algorithmica, 22(1/2):211-31, 1998.
[16] F.R.K. Chung. Spectral Graph Theory. Number 92. AMS, 1992.
[17] R.I. Kondor and J.D. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Proc. International Conference on Machine Learning, pages 315-22, 2002.
[18] L. Gómez-Chova et al. Semi-supervised image classification with Laplacian support vector machines. IEEE Geoscience and Remote Sensing Letters, 5(3):336-40, 2008.
[19] A.J. Smola and R. Kondor. Kernels and regularization on graphs. In Proc. COLT, page 144. Springer-Verlag, 2003.
[20] F. Rapaport et al. Classification of microarray data using gene networks. BMC Bioinformatics, 8(1):35, 2007.
[21] J. Jost. Riemannian Geometry and Geometric Analysis. Springer-Verlag, 2008.
[22] J. Lafferty and G. Lebanon. Diffusion kernels on statistical manifolds. JMLR, 6:129-63, 2005.
[23] A. Andrade et al. Detection of fMRI activation using cortical surface mapping. Human Brain Mapping, 12(2):79-93, 2001.
[24] A. Cachia et al. A primal sketch of the cortex mean curvature: a morphogenesis based approach to study the variability of the folding patterns. IEEE TMI, 22(6):754-765, 2003.
[25] M.K. Chung. Heat kernel smoothing and its application to cortical manifolds. Technical Report 1090, Department of Statistics, University of Wisconsin, Madison, 2004.
[26] M.K. Chung et al. Cortical thickness analysis in autism with heat kernel smoothing. NeuroImage, 25(4):1256-65, 2005.
[27] S.-I. Amari et al. Differential Geometry in Statistical Inference, volume 10. Institute of Mathematical Statistics, 1987.
[28] O. Druet et al. Blow-up Theory for Elliptic PDEs in Riemannian Geometry. Princeton University Press, 2004.
[29] C.R. Jack Jr. et al. The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI methods. J Magn Reson Imaging, 27(4):685-91, 2008.
[30] J. Ashburner and K.J. Friston. Unified segmentation. NeuroImage, 26(3):839-51, 2005.
[31] R.S. Desikan et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage, 31(3):968-980, 2006.
An Alternative to Low-Level-Synchrony-Based Methods for Speech Detection

Javier R. Movellan, University of California, San Diego, Machine Perception Laboratory, Atkinson Hall (CALIT2), 6100, 9500 Gilman Dr., Mail Code 0440, La Jolla, CA 92093-0440. movellan@mplab.ucsd.edu
Paul Ruvolo, University of California, San Diego, Machine Perception Laboratory, Atkinson Hall (CALIT2), 6100, 9500 Gilman Dr., Mail Code 0440, La Jolla, CA 92093-0440. paul@mplab.ucsd.edu

Abstract

Determining whether someone is talking has applications in many areas such as speech recognition, speaker diarization, social robotics, facial expression recognition, and human computer interaction. One popular approach to this problem is audio-visual synchrony detection [10, 21, 12]. A candidate speaker is deemed to be talking if the visual signal around that speaker correlates with the auditory signal. Here we show that with the proper visual features (in this case movements of various facial muscle groups), a very accurate detector of speech can be created that does not use the audio signal at all. Further we show that this person-independent, visual-only detector can be used to train very accurate audio-based person-dependent voice models. The voice model has the advantage of being able to identify when a particular person is speaking even when they are not visible to the camera (e.g. in the case of a mobile robot). Moreover, we show that a simple sensory fusion scheme between the auditory and visual models improves performance on the task of talking detection. The work here provides dramatic evidence about the efficacy of two very different approaches to multimodal speech detection on a challenging database.

1 Introduction

In recent years interest has been building [10, 21, 16, 8, 12] in the problem of detecting locations in the visual field that are responsible for auditory signals. A specialization of this problem is determining whether a person in the visual field is currently talking. Applications of this technology are wide ranging: from speech recognition in noisy environments, to speaker diarization, to expression recognition systems that may benefit from knowing whether or not the person is talking when interpreting the observed expressions.

Past approaches to the problem of speaker detection have focused on exploiting audio-visual synchrony as a measure of how likely a person in the visual field is to have generated the current audio signal [10, 21, 16, 8, 12]. One benefit of these approaches is their general-purpose nature, i.e., they are not limited to detecting human speech [12]. Another benefit is that they require very little processing of the visual signal (some of them operating on raw pixel values [10]). However, as we show in this document, when visual features tailored to the analysis of facial expressions are used, it is possible to develop a very robust speech detector that is based only on the visual signal and that far outperforms the past approaches.

Given the strong performance of the visual speech detector, we incorporate auditory information using the paradigm of transductive learning. Specifically, we use the visual-only detector's output as
Another view of our proposed approach is that it is also based on synchrony detection, however, at a much higher level and much longer time scale than previous approaches. More concretely our approach moves from the level of synchrony between pixel fluctuations and sound energy to the level of the visual markers of talking and auditory markers of a particular person?s voice. As we will show later, a benefit of this approach is that the auditory model that is optimized to predict the talking/not-talking visual signal for a particular candidate speaker also works quite well without using any visual input. This is an important property since the visual input is often periodically absent or degraded in real world applications (e.g. when a mobile robot moves to a part of the room where it can no longer see everyone in the room, or when a subject?s mouth is occluded). The results presented here challenge the orthodoxy of the use of low-level synchrony related measures that dominates research in this area. 2 Methods In this section we review a popular approach to speech detection that uses Canonical Correlation Analysis (CCA). Next we present our method for visual-only speaker detection using facial expression dynamics. Finally, we show how to incorporate auditory information using our visual-only model as a training signal. 2.1 Speech Detection by Low-level Synchrony Hershey et. al. [10] pioneered the use of audio-visual synchrony for speech detection. Slaney et. al. [21] presented a thorough evaluation of methods for detecting audio-visual synchrony. Slaney et. al. were chiefly interested in designing a system to automatically synchronize audio and video, however, their results inspired others to use similar approaches for detecting regions in the visual field responsible for auditory events [12]. The general idea is that if measurements in two different sensory modalities are correlated then they are likely to be generated by a single underlying common cause. For example, if mouth pixels of a potential speaker are highly predictable based on sound energy then it is likely that there is a common cause underlying both sensory measurements (i.e. that the candidate speaker is currently talking). A popular apprach to detect correlations between two different signals is Canonical Correlation Analysis. Let A1 , . . . , AN and V1 , . . . , VN be sequences of audio and visual features respectively with each Ai ? Rv and Vi ? Ru . We collectively refer to the audio and visual features with the variables A ? Rv?N and V ? Ru?N . The goal of CCA is to find weight vectors wA ? Rv and wV ? Ru such that the projection of each sequence of sensory measurements onto these weight vectors is maximally correlated. The objective can be stated as follows: (wA , wV ) = argmax ?(A> wA , V > wv ) (1) ||wA ||2 ?1,||wV ||2 ?1 Where ? is the Pearson correlation coefficient. Equation 1 reduces to a generalized Eigenvalue problem (see [9] for more details). Our model of speaker detection based on CCA involves computing canonical vectors wA and wV that solve Equation 1 and then computing time-windowed estimates of the correlation of the auditory and visual features projected on these vectors at each point in time. The final judgment as to whether or not a candidate face is speaking is determined by thresholding the windowed correlation value. 2.2 Visual Detector of Speech The Facial Action Coding System (FACS) is an anatomically inspired, comprehensive and versatile method to describe human facial expressions [7]. 
2.2 Visual Detector of Speech

The Facial Action Coding System (FACS) is an anatomically inspired, comprehensive and versatile method to describe human facial expressions [7]. FACS encodes the observed expressions as combinations of Action Units (AUs). Roughly speaking, AUs describe changes in the appearance of the face that are due to the effect of individual muscle movements.

In recent years significant progress has been made in the full automation of FACS. The Computer Expression Recognition Toolbox (CERT, shown in Figure 1) [2] is a state-of-the-art system for automatic FACS coding from video. The output of the CERT system provides a versatile and effective set of features for vision-based automatic analysis of facial behavior. Among other things it has been successfully used to recognize driver fatigue [22], discriminate genuine from faked pain [13], and estimate how difficult a student finds a video lecture [24, 23]. In this paper we used 84 outputs of the CERT system, ranging from the locations of key feature points on the face, to movements of individual facial muscle groups (Action Units), to detectors that specify high-level emotional categories (such as distress).

[Figure 1: The Computer Expression Recognition Toolbox was used to automatically extract 84 features describing the observed facial expressions. These features were used for training a speech detector.]

Figure 2 shows an example of the dynamics of CERT outputs during periods of talking and non-talking. There appears to be a periodicity to the modulations in the chin raise Action Unit (AU 17) during the speech period. In order to capture this type of temporal fluctuation we processed the raw CERT outputs with a bank of temporal Gabor filters. Figure 3 shows a subset of the filters we used: the real and imaginary parts of the filter output over a range of bandwidth and fundamental frequency values. In this work we use a total of 25 temporal Gabors. Specifically, we use all combinations of half-magnitude bandwidths of 3.4, 6.8, 10.2, 13.6, and 17 Hz and peak frequency values of 1, 2, 3, 4, and 5 Hz. The outputs of these filters were used as input to a ridge logistic regression classifier [5]. Logistic regression is a ubiquitous tool for machine learning and has performed quite well over a range of tasks [11]. Popular approaches like Support Vector Machines and Boosting can be seen as special cases of logistic regression. One advantage of logistic regression is that it provides estimates of the posterior probability of the category of interest given the input; in our case, the probability that a sequence of observed images corresponds to a person talking.

[Figure 2: An example of the shift in Action Unit output when talking begins. The figure shows a bar graph where the height of each black line corresponds to the value of Action Unit 17 for a particular frame. Qualitatively there is a periodicity in CERT's Action Unit 17 (Chin Raise) output during the talking period.]

[Figure 3: A selection of the temporal Gabor filter bank used to express the modulation of the CERT outputs. Shown are both the real and imaginary Gabor components over a range of bandwidths and peak frequencies.]
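The sketch below assembles this visual detector: a 25-filter temporal Gabor bank applied to each CERT channel, with the filter energies fed to an L2-penalized ("ridge") logistic regression. The 30 Hz frame rate, the exact mapping from half-magnitude bandwidth to the Gaussian envelope, and the synthetic stand-in for CERT outputs are assumptions; only the filter-bank parameter grid follows the text.

```python
# Sketch of the visual speech detector of Sec. 2.2 (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

fs = 30.0                                        # assumed video frame rate (Hz)

def temporal_gabor(f0, bw, fs):
    # Gaussian envelope; sigma set from the half-magnitude bandwidth (assumed)
    sigma = np.sqrt(2 * np.log(2)) / (np.pi * bw)
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

bank = [temporal_gabor(f0, bw, fs)
        for bw in (3.4, 6.8, 10.2, 13.6, 17.0)
        for f0 in (1, 2, 3, 4, 5)]               # the paper's 25 filters

def gabor_energies(cert, bank):
    # cert: (T, 84) CERT outputs -> (T, 84 * len(bank)) filter energies
    feats = [np.abs(np.convolve(cert[:, j], g, mode="same"))
             for g in bank for j in range(cert.shape[1])]
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)
cert = rng.normal(size=(2000, 84))               # stand-in for CERT outputs
y = (rng.random(2000) > 0.5).astype(int)         # toy talking labels

X = gabor_energies(cert, bank)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
p_talking = clf.predict_proba(X)[:, 1]           # posterior P(talking | video)
```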
2.3 Voice Model

The visual speech detector described above was then used to automatically label audio-visual speech signals. These labels were then used to train person-specific voice models. This paradigm for combining weakly labeled data and supervised learning is known as transductive learning in the machine learning community. It is possible to cast the bootstrapping of the voice model very similarly to the more conventional Canonical Correlation method discussed in Section 2.1. Although it is known [20] that non-linear models provide superior performance to linear models for auditory speaker identification, consider the case where we seek to learn a linear model over auditory features to determine a model of a particular speaker's voice. If we assume that we are given a fixed linear model $w_V$ that predicts when a subject is talking based on visual features, we can reformulate the CCA-based approach to learning an auditory model as a simple linear regression problem:

$$w_A = \operatorname*{arg\,max}_{\|w_A\|_2 \le 1} \rho\left(A^{\top} w_A,\, V^{\top} w_V\right) \qquad (2)$$

$$\phantom{w_A} = \operatorname*{arg\,min}_{w_A} \min_{b} \left\| A^{\top} w_A + b - V^{\top} w_V \right\|^2 \qquad (3)$$

where $b$ is a bias term. While this view is useful for seeing the commonalities between our approach and the classical synchrony approaches, it is important to note that our approach does not have the restriction of requiring linear models for either the auditory or visual talking detectors. In this section we show how we can fit a non-linear voice model that is very popular for the task of speaker detection, using the visual detector output as a training signal.

2.3.1 Auditory Features

We use the popular Mel-Frequency Cepstral Coefficients (MFCCs) [3] as the auditory descriptors to model the voice of a candidate speaker. MFCCs have been applied to a wide range of audio category recognition problems such as genre identification and speaker identification [19], and can be seen as capturing the timbral information of sound. See [14] for a more thorough discussion of the MFCC feature. In other work, various statistics of the MFCC features have also been shown to be informative (e.g. first or second temporal derivatives). In this work we only use the raw MFCC outputs, leaving a systematic exploration of the acoustic feature space as future work.

2.3.2 Learning and Classification

Given a temporal segmentation of when each of a set of candidate speakers is speaking, we define the set of MFCC features generated by speaker $i$ as $F_A^i$, where each column $F_A^{ij}$ denotes the MFCC features of speaker $i$ at the $j$-th time point at which that speaker is talking. In order to build an auditory model that can discriminate who is speaking, we first model the density $p_i$ of input features for the $i$-th speaker based on the training data $F_A^i$. In order to determine the probability of a speaker generating new input audio features $T_A$, we apply Bayes' rule: $p(S_i = 1 \mid T_A) \propto p(T_A \mid S_i = 1)\, p(S_i = 1)$, where $S_i$ indicates whether or not the $i$-th speaker is currently speaking. The probability distributions of the audio features, given whether or not a given speaker is talking, are modeled using 4-state hidden Markov models with each state having an independent 4-component Gaussian mixture model. The transition matrix is unconstrained (i.e. any state may transition to any other). The parameters of the voice model were learned using the Expectation-Maximization algorithm [6].

2.3.3 Threshold Selection

The outputs of the visual detector over time provide an estimate of whether or not a candidate speaker is talking. In this work we convert these outputs into a binary temporal segmentation of when a candidate speaker was or was not talking. In practice we found that the outputs of the CERT system had different baselines for each subject, and thus it was necessary to develop a method for automatically finding person-dependent thresholds of the visual detector output in order to accurately segment the areas where each speaker was or was not talking. Our threshold selection mechanism uses a training portion of audio-visual input as a way of tuning the threshold to each candidate speaker. In order to select an appropriate threshold, we trained a number of audio models, each using a different threshold on the visual speech detector output. Each of these thresholds induces a binary segmentation which in turn is fed to the voice model learning component described in Section 2.3. Next, we evaluate each voice model on a set of testing samples (e.g. those collected after a sufficient amount of audio-visual input has been gathered for a particular candidate speaker). The acoustic model that achieved the highest generalization performance (with respect to the thresholded visual detector's output on the testing portion) was then selected for fusion with the visual-only model. The reason for this choice is that models trained with less-noisy labels are likely to yield better generalization performance, and thus the boundary used to create those labels was most likely at the boundary between the two classes.
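A minimal sketch of the voice model of Section 2.3.2 is given below, using hmmlearn's GMMHMM as a stand-in implementation of the 4-state HMM with 4-component GMM emissions trained by EM. The synthetic MFCC matrices, the uniform prior in the Bayes-rule step, and the use of log-odds as a decision statistic are illustrative assumptions; extracting real MFCCs (e.g. with an audio library) is not shown.

```python
# Sketch of the GMM-HMM voice model of Sec. 2.3.2 (toy data).
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)
F_talking = rng.normal(size=(1500, 13))            # MFCCs labeled "talking"
F_other = rng.normal(loc=0.5, size=(1500, 13))     # MFCCs labeled otherwise

def fit_voice_model(frames):
    # 4-state HMM, 4 Gaussians per state, unconstrained transitions, EM fit
    m = GMMHMM(n_components=4, n_mix=4, covariance_type="diag", n_iter=25)
    m.fit(frames)
    return m

hmm_talk = fit_voice_model(F_talking)              # models p(MFCC | S_i = 1)
hmm_rest = fit_voice_model(F_other)                # models p(MFCC | S_i = 0)

# Bayes' rule on a window of new audio T_A (uniform prior assumed here)
T_A = rng.normal(size=(90, 13))                    # ~3 s of test MFCC frames
log_odds = hmm_talk.score(T_A) - hmm_rest.score(T_A)
print("speaker talking" if log_odds > 0 else "speaker not talking")
```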
In practice we found that the outputs of the CERT system had different baselines for each subject, and thus it was necessary to develop a method for automatically finding person dependent thresholds of the visual detector output in order to accurately segment the areas of where each speaker was or was not talking. Our threshold selection mechanism uses a training portion of audio-visual input as a method of tuning the threshold to each candidate speaker. In order to select an appropriate threshold we trained a number of audio models each trained using a different threshold for the visual speech detector output. Each of these thresholds induces a binary segmentation which in turn is fed to the voice model learning component described in Section 2.3. Next, we evaluate each voice model on a set of testing samples (e.g. those collected after a sufficient amount of time audio-visual input has been collected for a particular candidate speaker). The acoustic model that achieved the highest generalization performance (with respect to the thresholded visual detector?s output on the testing portion) was then selected for fusion with the visual-only model. The reason for this choice is that models trained with less-noisy labels are likely to yield better generalization performance and thus the boundary used to create those labels was 5 Visual Detector Output Training Phase Training Portion Candidate Thresholds 2 MFCCs Training Portion 4 6 8 10 12 10 20 Time Training Segmentation 1 30 40 50 60 70 80 90 Time Audio Model 1 Training Segmentation 2 Audio Model 2 Testing Portion Candidate Thresholds Testing Portion Threshold Selection Mechanism Not Talking Time Talking Segmentation Based on Thresholded Visual Detector Output Testing Portion Testing Portion Not Talking Talking Time Fusion Model Fused Output Auditory Model 2 Output Auditory Model 1 Output Visual Detector Output Testing Phase Segmentation Based on Thresholded Visual Detector Output Figure 4: A schematic of our threshold selection system. In the training stage several models are trained with different temporal segmentations over who is speaking. In the testing stage each of these discrete models is evaluated (in the figure there are only two but in practice we use more) to see how well it generalizes on the testing set (where ground truth is defined based on the visual detector?s thresholded output). Finally, the detector that generalizes the best is fused with the visual detector to give the final output of our system. most likely at the boundary between the two classes. See Figure 4 for a graphical depiction of this approach. Note that at no point in this approach is it necessary to have ground truth values for when a particular person was speaking. All assessments of generalization performance are with respect to the outputs of the visual classifier and not the true speaking vs. not speaking label. 2.4 Fusion There are many approaches [15] to fusing the visual and auditory model outputs to estimate the likelihood that someone is or is not talking. In the current work we employ a very simple fusion scheme that likely could be improved upon in the future. In order to compute the fused output we simply add the whitened outputs of the visual and auditory detectors? outputs. 6 2.5 Related Work Most past approaches for detecting whether someone is talking have either been purely visual [18] (i.e. using a classifier trained on visual features from a training database) or based on audio-visual synchrony [21, 8, 12]. 
The system most similar to that proposed in this document is due to Noulas and Krose [16]. In their work a switching model is proposed that modifies the audio-visual probability emission distributions based on who is likely speaking. Three principal differences with our work are: Noulas and Krose use a synchrony-based method for initializing the learning of both the voice and visual models (in contrast to our system, which uses a robust visual detector for initializing); Noulas and Krose use static visual descriptors (in contrast to our system, which uses Gabor energy filters that capture facial expression dynamics); and finally, we provide a method for automatic threshold selection to adjust the initial detector's output to the characteristics of the current speaker.

3 Results

We compared the performance of two multi-modal methods for speech detection. The first method used low-level audio-visual synchrony detection to estimate the probability of whether or not someone is speaking at each point in time (see Section 2.1). The second approach is the approach proposed in this document: start with a visual-only speech detector, then incorporate acoustic information by training speaker-dependent voice models, and finally fuse the audio and visual models' outputs. The database we use for training and evaluation is the D006 (aka RUFACS) database [1]. The portion of the database we worked with contains 33 interviews (each approximately 3 minutes in length) between college students and an interrogator who is not visible in the video. The database contains a wide variety of vocal and facial expression behavior, as the responses of the interviewees are not scripted but rather spontaneous. As a consequence, this database provides a much more realistic testbed for speech detection algorithms than the highly scripted databases (e.g. the CUAVE database [17]) used to evaluate other approaches. Since we cannot extract visual information of the person behind the camera, we define the task of interest to be a binary classification of whether or not the person being interviewed is talking at each point in time. It is reasonable to conclude that our performance would only improve on the task of speaker detection in two-speaker environments if we could see both speakers' faces. The generalization to more than two speakers is untested in this document; we leave the determination of the scalability of our approach to more than two speakers as future work. In order to test the effect of the voice model bootstrapping, we use the first half of each interview as a training portion (that is, the portion on which the voice model is learned) and the second half as the testing portion. The specific choice of a 50/50 split between training and test is somewhat arbitrary; however, it is a reasonable compromise between spending too long learning the voice model and not having sufficient audio input to fit the voice model. It is important to note that no ground truth was used from the first 50% of each interview, as the labeling was the result of the person-independent visual speech detector. In total we have 6 interviews that are suitable for evaluation purposes (i.e. we have audio and video information and codes as to when the person in front of the camera is talking). However, we have 27 additional interviews where only video was available. The frames from these videos were used to train the visual-only speech detector.
For both our method and the synchrony method, the audio modality was summarized by the first 13 (0th through 12th) MFCCs. To evaluate the synchrony-based model we perform the following steps. First we apply CCA between MFCCs and CERT outputs (plus the temporal derivatives and absolute value of the temporal derivatives) over the database of six interviews. Next we look for regions in the interview where the projection of the audio and video onto the vectors found by CCA yields high correlation. To compute this correlation we summarized the correlation at each point in time by computing the correlation over a 5-second window centered at that point. This evaluation method is called "Windowed Correlation" in the results table for the synchrony detection (see Table 2). We tried several different window lengths and found that the performance was best with 5 seconds.

Subject              16      17      49      56      71      94      mean
Visual Only          0.9891  0.9444  0.9860  0.9598  0.9800  0.9125  0.9620
Audio Only           0.9796  0.9560  0.9858  0.8924  0.9321  0.8924  0.9397
Fused                0.9929  0.9776  0.9956  0.9593  0.9780  0.9364  0.9733
Visual No Dynamics   0.7894  0.8166  0.8370  0.8795  0.9375  0.7506  0.8351

Table 1: Results of our bootstrapping model for detecting speech. Each row indicates the performance (as measured by area under the ROC) of a particular detector on the second half of a video of a particular subject.

Subject                16     17     49     56     71     94     mean
Windowed Correlation   .5925  .7937  .6067  .7290  .8078  .6327  .6937

Table 2: The performance of the synchrony detection model. Each row indicates the performance of a particular detector on the second half of a video of a particular subject.

Table 2 and Table 1 summarize the performance of the synchrony detection approach and our approach, respectively. Our approach achieves an average area under the ROC of .9733, compared to .6937 for the synchrony approach. Moreover, our approach does considerably better using only vision on the area-under-the-ROC metric (.9620) than the synchrony detection approach, which has access to both audio and video. The Gabor temporal filter bank helped to significantly improve performance, raising it from .8351 to .9620 (see Table 1). It is also encouraging that our method was able to learn an accurate audio-only model of the interviewee (average area under the ROC of .9397). This validates that our method is of use in situations where we cannot expect to always have visual input on each of the candidate speakers' faces. Our approach also benefitted from fusing the learned audio-based speaker models: the 2-AFC error (1 minus the area under the ROC) of the fused model decreased by an average (geometric mean over each of the six interviews) of 57% over the vision-only model.

4 Discussion and Future Work

We described a new method for multi-modal detection of when a candidate person is speaking. Our approach used the output of a person-independent, vision-based speech detector to train a person-dependent voice model. To this end we described a novel approach for threshold selection for training the voice model based on the outputs of the visual detector. We showed that our method greatly improved performance with respect to previous approaches to the speech detection problem. We also briefly discussed how the work proposed here can be seen in a similar light as the more conventional synchrony detection methods of the past.
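For reference, the "Windowed Correlation" score of Table 2 can be computed roughly as follows, assuming `a` and `v` hold the one-dimensional projections of the audio and video features onto the leading CCA directions; this sketch is our reading of the evaluation described above, not the authors' code.

```python
import numpy as np

def windowed_correlation(a, v, fps, window_sec=5.0):
    """Correlation of the CCA projections a and v inside a sliding
    window_sec-second window centered at each frame."""
    a, v = np.asarray(a, float), np.asarray(v, float)
    half = int(window_sec * fps / 2)
    out = np.full(len(a), np.nan)  # undefined near the boundaries
    for t in range(half, len(a) - half):
        out[t] = np.corrcoef(a[t - half:t + half],
                             v[t - half:t + half])[0, 1]
    return out
```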
This view of our method as synchrony detection over long time scales, combined with the large gain in performance reported here, demonstrates that synchrony over long time scales and high-level features (e.g. talking / not talking) works significantly better than synchrony over short time scales and low-level features (e.g. pixel intensities). In the future we would like to extend our approach to learn fully online, by incorporating approximations to the EM algorithm that are able to run in real time [4], as well as performing threshold selection on the fly. Another challenge is incorporating confidences from the visual detector output in the learning of the voice model.

References

[1] M. S. Bartlett, G. Littlewort, C. Lainscsek, I. Fasel, and J. Movellan. Recognition of facial actions in spontaneous expressions. Journal of Multimedia, 2006.
[2] M. S. Bartlett, G. C. Littlewort, M. G. Frank, C. Lainscsek, I. R. Fasel, and J. R. Movellan. Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, 1(6):22, 2006.
[3] J. Bridle and M. Brown. An experimental automatic word recognition system. JSRU Report, 1003, 1974.
[4] A. Declercq and J. Piater. Online learning of Gaussian mixture models - a two-level approach. In Intl. Conf. Comp. Vis., Imaging and Comp. Graph. Theory and Applications, pages 605-611, 2008.
[5] A. DeMaris. A tutorial in logistic regression. Journal of Marriage and the Family, pages 956-968, 1995.
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(Series B):1-38, 1977.
[7] P. Ekman, W. Friesen, and J. Hager. Facial Action Coding System (FACS): Manual and Investigator's Guide. A Human Face, Salt Lake City, UT, 2002.
[8] J. Fisher and T. Darrell. Speaker association with signal-level audiovisual fusion. IEEE Transactions on Multimedia, 6(3):406-413, 2004.
[9] D. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: an overview with application to learning methods. Neural Computation, 16(12):2639-2664, 2004.
[10] J. Hershey and J. Movellan. Audio-vision: Using audio-visual synchrony to locate sounds. Advances in Neural Information Processing Systems, 12:813-819, 2000.
[11] D. Hosmer and S. Lemeshow. Applied Logistic Regression. Wiley-Interscience, 2000.
[12] E. Kidron, Y. Schechner, and M. Elad. Pixels that sound. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, page 88, 2005.
[13] G. Littlewort, M. Bartlett, and K. Lee. Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain. In Proceedings of the 9th International Conference on Multimodal Interfaces, pages 15-21. ACM, 2007.
[14] B. Logan. Mel frequency cepstral coefficients for music modeling. In International Symposium on Music Information Retrieval, volume 28, 2000.
[15] J. Movellan and P. Mineiro. Robust sensor fusion: Analysis and application to audio visual speech recognition. Machine Learning, 32(2):85-100, 1998.
[16] A. Noulas and B. Krose. On-line multi-modal speaker diarization. In Proceedings of the 9th International Conference on Multimodal Interfaces, pages 350-357. ACM, 2007.
[17] E. Patterson, S. Gurbuz, Z. Tufekci, and J. Gowdy. CUAVE: A new audio-visual database for multimodal human-computer interface research. In IEEE International Conference on Acoustics, Speech and Signal Processing, volume 2, 2002.
[18] J. Rehg, K. Murphy, and P. Fieguth.
Vision-based speaker detection using Bayesian networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 110-116, 1999.
[19] D. Reynolds. Experimental evaluation of features for robust speaker identification. IEEE Transactions on Speech and Audio Processing, 2(4):639-643, 1994.
[20] D. Reynolds, T. Quatieri, and R. Dunn. Speaker verification using adapted Gaussian mixture models. Digital Signal Processing, 10(1-3):19-41, 2000.
[21] M. Slaney and M. Covell. FaceSync: A linear operator for measuring synchronization of video facial images and audio tracks. Advances in Neural Information Processing Systems, pages 814-820, 2001.
[22] E. Vural, M. Cetin, A. Ercil, G. Littlewort, M. Bartlett, and J. Movellan. Drowsy driver detection through facial movement analysis. Lecture Notes in Computer Science, 4796:6-18, 2007.
[23] J. Whitehill, M. Bartlett, and J. Movellan. Automatic facial expression recognition for intelligent tutoring systems. Computer Vision and Pattern Recognition, 2008.
[24] J. Whitehill, M. S. Bartlett, and J. R. Movellan. Measuring the difficulty of a lecture using automatic facial expression recognition. In Intelligent Tutoring Systems, 2008.
Graph-Valued Regression

Han Liu, Xi Chen, John Lafferty, Larry Wasserman
Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

Undirected graphical models encode in a graph G the dependency structure of a random vector Y. In many applications, it is of interest to model Y given another random vector X as input. We refer to the problem of estimating the graph G(x) of Y conditioned on X = x as "graph-valued regression". In this paper, we propose a semiparametric method for estimating G(x) that builds a tree on the X space just as in CART (classification and regression trees), but at each leaf of the tree estimates a graph. We call the method "Graph-optimized CART", or Go-CART. We study the theoretical properties of Go-CART using dyadic partitioning trees, establishing oracle inequalities on risk minimization and tree partition consistency. We also demonstrate the application of Go-CART to a meteorological dataset, showing how graph-valued regression can provide a useful tool for analyzing complex data.

1 Introduction

Let Y be a p-dimensional random vector with distribution P. A common way to study the structure of P is to construct the undirected graph G = (V, E), where the vertex set V corresponds to the p components of the vector Y. The edge set E is a subset of the pairs of vertices, where an edge between Y_j and Y_k is absent if and only if Y_j is conditionally independent of Y_k given all the other variables. Suppose now that Y and X are both random vectors, and let P(· | X) denote the conditional distribution of Y given X. In a typical regression problem, we are interested in the conditional mean μ(x) = E(Y | X = x). But if Y is multivariate, we may also be interested in how the structure of P(· | X) varies as a function of X. In particular, let G(x) be the undirected graph corresponding to P(· | X = x). We refer to the problem of estimating G(x) as graph-valued regression. Let G = {G(x) : x ∈ X} be a set of graphs indexed by x ∈ X, where X is the domain of X. Then G induces a partition of X, denoted X_1, ..., X_m, where x_1 and x_2 lie in the same partition element if and only if G(x_1) = G(x_2). Graph-valued regression is thus the problem of estimating the partition and estimating the graph within each partition element. We present three different partition-based graph estimators: two that use global optimization, and one based on a greedy splitting procedure. One of the optimization-based schemes uses penalized empirical risk minimization; the other uses held-out risk minimization. As we show, both methods enjoy strong theoretical properties under relatively weak assumptions; in particular, we establish oracle inequalities on the excess risk of the estimators, and tree partition consistency (under stronger assumptions) in Section 4. While the optimization-based estimates are attractive, they do not scale well computationally when the input dimension is large. An alternative is to adapt the greedy algorithms of classical CART, as we describe in Section 3. In Section 5 we present experimental results on both synthetic data and a meteorological dataset, demonstrating how graph-valued regression can be an effective tool for analyzing high-dimensional data with covariates.

2 Graph-Valued Regression

Let y_1, ..., y_n be a random sample of vectors from P, where each y_i ∈ R^p. We are interested in the case where p is large and, in fact, may diverge with n asymptotically.
One way to estimate G from the sample is the graphical lasso or glasso [13, 5, 1], where one assumes that P is Gaussian with mean μ and covariance matrix Σ. Missing edges in the graph correspond to zero elements in the precision matrix Ω = Σ^{-1} [12, 4, 7]. A sparse estimate of Ω is obtained by solving

$\hat\Omega = \operatorname{argmin}_{\Omega \succ 0} \left\{ \operatorname{tr}(S\Omega) - \log|\Omega| + \lambda \|\Omega\|_1 \right\}$    (1)

where Ω is positive definite, S is the sample covariance matrix, and $\|\Omega\|_1 = \sum_{j,k} |\Omega_{jk}|$ is the elementwise ℓ1-norm of Ω. A fast algorithm for finding $\hat\Omega$ was given by Friedman et al. [5], which involves estimating a single row (and column) of Ω in each iteration by solving a lasso regression. The theoretical properties of $\hat\Omega$ have been studied by Rothman et al. [10] and Ravikumar et al. [9]. In practice, it seems that the glasso yields reasonable graph estimators even if Y is not Gaussian; however, proving conditions under which this happens is an open problem. We briefly mention three different strategies for estimating G(x), the graph of Y conditioned on X = x, each of which builds upon the glasso.

Parametric Estimators. Assume that Z = (X, Y) is jointly multivariate Gaussian with covariance matrix

$\Sigma = \begin{pmatrix} \Sigma_X & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_Y \end{pmatrix}$.

We can estimate Σ_X, Σ_Y, and Σ_XY by their corresponding sample quantities $\hat\Sigma_X$, $\hat\Sigma_Y$, and $\hat\Sigma_{XY}$, and the marginal precision matrix of X, denoted Ω_X, can be estimated using the glasso. The conditional distribution of Y given X = x is obtained by standard Gaussian formulas. In particular, the conditional covariance matrix of Y | X is $\hat\Sigma_{Y|X} = \hat\Sigma_Y - \hat\Sigma_{YX} \hat\Omega_X \hat\Sigma_{XY}$, and a sparse estimate of Ω_{Y|X} can be obtained by directly plugging $\hat\Sigma_{Y|X}$ into the glasso. However, the estimated graph does not vary with different values of X.

Kernel Smoothing Estimators. We assume that Y given X is Gaussian, but without making any assumption about the marginal distribution of X. Thus Y | X = x ~ N(μ(x), Σ(x)). Under the assumption that both μ(x) and Σ(x) are smooth functions of x, we estimate Σ(x) via kernel smoothing:

$\hat\Sigma(x) = \dfrac{\sum_{i=1}^n K\!\left(\frac{\|x - x_i\|}{h}\right) (y_i - \hat\mu(x))(y_i - \hat\mu(x))^\top}{\sum_{i=1}^n K\!\left(\frac{\|x - x_i\|}{h}\right)}$

where K is a kernel (e.g. the probability density function of the standard Gaussian distribution), ‖·‖ is the Euclidean norm, h > 0 is a bandwidth, and

$\hat\mu(x) = \dfrac{\sum_{i=1}^n K\!\left(\frac{\|x - x_i\|}{h}\right) y_i}{\sum_{i=1}^n K\!\left(\frac{\|x - x_i\|}{h}\right)}$.

Now we apply the glasso in (1) with $S = \hat\Sigma(x)$ to obtain an estimate of G(x). This method is appealing because it is simple and very similar to nonparametric regression smoothing; the method was analyzed for one-dimensional X in [14]. However, while it is easy to estimate G(x) at any given x, it requires global smoothness of the mean and covariance functions.

Partition Estimators. In this approach, we partition X into finitely many connected regions X_1, ..., X_m. Within each X_j, we apply the glasso to obtain an estimated graph $\hat G_j$. We then take $\hat G(x) = \hat G_j$ for all x ∈ X_j. To find the partition, we appeal to the idea used in CART (classification and regression trees) [3]. We take the partition elements to be recursively defined hyperrectangles. As is well known, we can then represent the partition by a tree, where each leaf node corresponds to a single partition element. In CART, the leaves are associated with the means within each partition element; in our case, there will be an estimated undirected graph for each leaf node. We refer to this method as Graph-optimized CART, or Go-CART. The remainder of this paper is devoted to the details of this method.

3 Graph-Optimized CART

Let X ∈ R^d and Y ∈ R^p be two random vectors, and let
{(x_1, y_1), ..., (x_n, y_n)} be n i.i.d. samples from the joint distribution of (X, Y). The domains of X and Y are denoted by X and Y respectively; for simplicity we take X = [0, 1]^d. We assume that Y | X = x ~ N_p(μ(x), Σ(x)), where μ : R^d → R^p is a vector-valued mean function and Σ : R^d → R^{p×p} is a matrix-valued covariance function. We also assume that for each x, Ω(x) = Σ(x)^{-1} is a sparse matrix, i.e., many elements of Ω(x) are zero. In addition, Ω(x) may also be a sparse function of x, i.e., Ω(x) = Ω(x_R) for some R ⊂ {1, ..., d} with cardinality |R| ≪ d. The task of graph-valued regression is to find a sparse inverse covariance $\hat\Omega(x)$ to estimate Ω(x) for any x ∈ X; in some situations the graph of Ω(x) is of greater interest than the entries of Ω(x) themselves.

Go-CART is a partition-based conditional graph estimator. We partition X into finitely many connected regions X_1, ..., X_m, and within each X_j we apply the glasso to estimate a graph $\hat G_j$. We then take $\hat G(x) = \hat G_j$ for all x ∈ X_j. To find the partition, we restrict ourselves to dyadic splits, as studied by [11, 2]. The primary reason for such a choice is the computational and theoretical tractability of dyadic-partition-based estimators.

Let T denote the set of dyadic partitioning trees (DPTs) defined over X = [0, 1]^d, where each DPT T ∈ T is constructed by recursively dividing X by means of axis-orthogonal dyadic splits. Each node of a DPT corresponds to a hyperrectangle in [0, 1]^d. If a node is associated to the hyperrectangle $A = \prod_{l=1}^d [a_l, b_l]$, then after being dyadically split along dimension k, the two children are associated with the sub-hyperrectangles

$A_L^{(k)} = \prod_{l<k} [a_l, b_l] \times \left[a_k, \tfrac{a_k + b_k}{2}\right] \times \prod_{l>k} [a_l, b_l]$    and    $A_R^{(k)} = A \setminus A_L^{(k)}$.

Given a DPT T, we denote by Π(T) = {X_1, ..., X_{m_T}} the partition of X induced by the leaf nodes of T. For a dyadic integer N = 2^K, we define T_N to be the collection of all DPTs such that no partition element has a side length smaller than 2^{-K}. Let I(·) denote the indicator function. We denote by μ_T(x) and Ω_T(x) the piecewise constant mean and precision functions associated with T:

$\mu_T(x) = \sum_{j=1}^{m_T} \mu_{X_j} \cdot I(x \in X_j)$    and    $\Omega_T(x) = \sum_{j=1}^{m_T} \Omega_{X_j} \cdot I(x \in X_j)$,

where μ_{X_j} ∈ R^p and Ω_{X_j} ∈ R^{p×p} are the mean vector and precision matrix for X_j. Before formally defining our graph-valued regression estimators, we require some further definitions. Given a DPT T with induced partition Π(T) = {X_j}_{j=1}^{m_T} and corresponding mean and precision functions μ_T(x) and Ω_T(x), the negative conditional log-likelihood risk R(T, μ_T, Ω_T) and its sample version $\hat R(T, \mu_T, \Omega_T)$ are defined as follows:

$R(T, \mu_T, \Omega_T) = \sum_{j=1}^{m_T} \mathbb{E}\left[ \left( \operatorname{tr}\!\left(\Omega_{X_j} (Y - \mu_{X_j})(Y - \mu_{X_j})^\top\right) - \log|\Omega_{X_j}| \right) \cdot I(X \in X_j) \right]$,    (2)

$\hat R(T, \mu_T, \Omega_T) = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^{m_T} \left( \operatorname{tr}\!\left(\Omega_{X_j} (y_i - \mu_{X_j})(y_i - \mu_{X_j})^\top\right) - \log|\Omega_{X_j}| \right) \cdot I(x_i \in X_j)$.    (3)

Let [[T]] > 0 denote a prefix code over all DPTs T ∈ T_N satisfying $\sum_{T \in T_N} 2^{-[[T]]} \le 1$. One such prefix code [[T]] is proposed in [11], and takes the form [[T]] = 3|Π(T)| − 1 + (|Π(T)| − 1) log d / log 2. A simple upper bound for [[T]] is

[[T]] ≤ (3 + log d / log 2) |Π(T)|.    (4)

Our analysis will assume that the conditional means and precision matrices are bounded in the ℓ∞ and ℓ1 norms; specifically, we suppose there is a positive constant B and a sequence L_{1,n}, ..., L_{m_T,n}, where each L_{j,n} ∈ R_+ is a function of the sample size n, and we define the domains of each μ_{X_j} and Ω_{X_j} as

M_j = { μ ∈ R^p : ‖μ‖_∞ ≤ B },
Λ_j = { Ω ∈ R^{p×p} : Ω is positive definite, symmetric, and ‖Ω‖_1 ≤ L_{j,n} }.    (5)
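As a concrete reference point before the definitions, the empirical risk (3) of a fixed partition can be evaluated directly. This is a NumPy sketch with illustrative names (`cell_of`, `mus`, and `omegas` map covariates and cells to the fitted quantities), using the identity tr(Ω r rᵀ) = rᵀ Ω r.

```python
import numpy as np

def empirical_risk(X, Y, cell_of, mus, omegas):
    """Empirical negative log-likelihood risk of Eq. (3) for a fixed
    partition; cell_of maps a covariate x to its partition cell id."""
    total = 0.0
    for x, y in zip(X, Y):
        j = cell_of(x)
        r = y - mus[j]
        _, logdet = np.linalg.slogdet(omegas[j])  # stable log|Omega|
        total += r @ omegas[j] @ r - logdet
    return total / len(Y)
```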
With this notation in place, we can now define two estimators (Definitions 1 and 2 below).

Definition 1. The penalized empirical risk minimization Go-CART estimator is defined as

$\left(\hat T, \{\hat\mu_{\hat X_j}\}, \{\hat\Omega_{\hat X_j}\}\right) = \operatorname{argmin}_{T \in T_N,\, \mu_{X_j} \in M_j,\, \Omega_{X_j} \in \Lambda_j} \left\{ \hat R(T, \mu_T, \Omega_T) + \operatorname{pen}(T) \right\}$,

where $\hat R$ is defined in (3) and $\operatorname{pen}(T) = \lambda_n \cdot m_T \sqrt{\frac{[[T]] \log 2 + 2\log(np)}{n}}$.

Empirically, we may always set the dyadic integer N to a reasonably large value; the regularization parameter λ_n is responsible for selecting a suitable DPT T ∈ T_N. We also formulate an estimator that minimizes held-out risk. Practically, we split the data into two partitions: D_1 = {(x_1, y_1), ..., (x_{n_1}, y_{n_1})} for training and D_2 = {(x'_1, y'_1), ..., (x'_{n_2}, y'_{n_2})} for validation, with n_1 + n_2 = n. The held-out negative log-likelihood risk is then given by

$\hat R_{\mathrm{out}}(T, \mu_T, \Omega_T) = \frac{1}{n_2} \sum_{i=1}^{n_2} \sum_{j=1}^{m_T} \left( \operatorname{tr}\!\left(\Omega_{X_j} (y'_i - \mu_{X_j})(y'_i - \mu_{X_j})^\top\right) - \log|\Omega_{X_j}| \right) \cdot I(x'_i \in X_j)$.    (6)

Definition 2. For each DPT T define

$(\hat\mu_T, \hat\Omega_T) = \operatorname{argmin}_{\mu_{X_j} \in M_j,\, \Omega_{X_j} \in \Lambda_j} \hat R(T, \mu_T, \Omega_T)$,

where $\hat R$ is defined in (3) but evaluated only on D_1 = {(x_1, y_1), ..., (x_{n_1}, y_{n_1})}. The held-out risk minimization Go-CART estimator is

$\hat T = \operatorname{argmin}_{T \in T_N} \hat R_{\mathrm{out}}(T, \hat\mu_T, \hat\Omega_T)$,

where $\hat R_{\mathrm{out}}$ is defined in (6) but evaluated only on D_2.

The above procedures require us to find an optimal dyadic partitioning tree within T_N. Although dynamic programming can be applied, as in [2], the computation does not scale to large input dimensions d. We now propose a simple yet effective greedy algorithm to find an approximate solution $(\hat T, \hat\mu_{\hat T}, \hat\Omega_{\hat T})$. We focus on the held-out risk minimization form of Definition 2, due to its superior empirical performance; but note that our greedy approach is generic and can easily be adapted to the penalized empirical risk minimization form. First, consider the simple case in which we are given a dyadic tree structure T that induces a partition Π(T) = {X_1, ..., X_{m_T}} on X. For any partition element X_j, we estimate the sample mean using D_1:

$\hat\mu_{X_j} = \frac{\sum_{i=1}^{n_1} y_i \cdot I(x_i \in X_j)}{\sum_{i=1}^{n_1} I(x_i \in X_j)}$.

The glasso is then used to estimate a sparse precision matrix $\hat\Omega_{X_j}$. More precisely, let $\hat\Sigma_{X_j}$ be the sample covariance matrix for the partition element X_j, given by

$\hat\Sigma_{X_j} = \frac{\sum_{i=1}^{n_1} (y_i - \hat\mu_{X_j})(y_i - \hat\mu_{X_j})^\top \cdot I(x_i \in X_j)}{\sum_{i=1}^{n_1} I(x_i \in X_j)}$.

The estimator $\hat\Omega_{X_j}$ is obtained by solving $\hat\Omega_{X_j} = \operatorname{argmin}_{\Omega \succ 0} \{ \operatorname{tr}(\hat\Sigma_{X_j}\Omega) - \log|\Omega| + \lambda_j \|\Omega\|_1 \}$, where λ_j is in one-to-one correspondence with L_{j,n} in (5). In practice, we run the full regularization path of the glasso, from large λ_j, which yields a very sparse graph, to small λ_j, and select the graph that minimizes the held-out negative log-likelihood risk. To further improve the model selection performance, we refit the parameters of the precision matrix after the graph has been selected. That is, to reduce the bias of the glasso, we first estimate the sparse precision matrix using ℓ1-regularization, and then refit the Gaussian model without ℓ1-regularization, but enforcing the sparsity pattern obtained in the first step. The natural, standard greedy procedure starts from the coarsest partition X = [0, 1]^d and then computes the decrease in held-out risk obtained by dyadically splitting each hyperrectangle A along each dimension k ∈ {1, ..., d}. The dimension k* that results in the largest decrease in held-out risk is selected, where the change in risk is given by

$\Delta \hat R_{\mathrm{out}}(A, \hat\mu_A, \hat\Omega_A) = \hat R_{\mathrm{out}}(A, \hat\mu_A, \hat\Omega_A) - \hat R_{\mathrm{out}}\!\left(A_L^{(k)}, \hat\mu_{A_L^{(k)}}, \hat\Omega_{A_L^{(k)}}\right) - \hat R_{\mathrm{out}}\!\left(A_R^{(k)}, \hat\mu_{A_R^{(k)}}, \hat\Omega_{A_R^{(k)}}\right)$.
If splitting any dimension k of A leads to an increase in the held-out risk, the element A is no longer split and becomes a partition element of Π(T). The details and pseudo-code are provided in the supplementary materials. This greedy partitioning method parallels the classical algorithms for classification and regression that have been used in statistical learning for decades. However, the strength of the procedures given in Definitions 1 and 2 is that they lend themselves to a theoretical analysis under relatively weak assumptions, as we show in the following section. The theoretical properties of greedy Go-CART are left to future work.
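A compact sketch of this greedy loop is given below. The helpers `heldout_risk` and `split` are hypothetical: the first is assumed to fit the cell's mean and glasso precision on D_1 (for instance with sklearn.covariance.GraphicalLasso, followed by the refit described above) and return the held-out risk on D_2; the second halves a hyperrectangle along one dimension. The `cell.side(k)` accessor is likewise an assumed interface.

```python
def greedy_go_cart(root_cell, heldout_risk, split, d, min_side):
    """Greedy dyadic splitting: split a cell along the dimension with
    the largest held-out-risk decrease; stop when no split helps or
    the 2**-K side-length resolution is reached."""
    leaves, frontier = [], [root_cell]
    while frontier:
        cell = frontier.pop()
        base = heldout_risk(cell)
        best = None
        for k in range(d):
            if cell.side(k) <= min_side:
                continue
            left, right = split(cell, k)
            gain = base - heldout_risk(left) - heldout_risk(right)
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, left, right)
        if best is None:
            leaves.append(cell)          # becomes a partition element
        else:
            frontier += [best[1], best[2]]
    return leaves
```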
We show that with high probability, the true dyadic partition structure can be correctly recovered. 5 Assumption 3. The true model is Y | X = x ? Np (??T ? (x), ??T ? (x)) ? T? where T ? TN is a DPT with induced partition ?(T ? ) = {Xj? }m j=1 and mT ? mT ?   ??T ? (x) = ??j I(x ? Xj? ), ??T ? (x) = ??j I(x ? Xj? ). j=1 (7) j=1 Under this assumption, clearly R(T ? , ??T ? , ??T ? ) = inf T ?TN ,?T ,?T ?MT R(T, ?T , ?T ), where MT is given by mT mT     MT = ?(x) = ?Xj I(x ? Xj ), ?(x) = ?Xj I(x ? Xj ) : ?Xj ? Mj , ?Xj ? ?j . j=1 j=1 Let T1 and T2 be two DPTs, if ?(T1 ) can be obtained by further split the hyperrectangles within ?(T2 ), we say ?(T2 ) ? ?(T1 ). We then have the following definitions: Definition 3. A tree estimation procedure T is tree partition consistent in case  P ?(T ? ) ? ?(T) ? 1 as n ? ?. Note that the estimated partition may be finer than the true partition. Establishing a tree partition consistency result requires further technical assumptions. The following assumption specifies that for arbitrary adjacent subregions of the true dyadic partition, either the means or the variances should be sufficiently different. Without such an assumption, of course, it is impossible to detect the boundaries of the true partition. Assumption 4. Let Xi? and Xj? be adjacent partition elements of T ? , so that they have a common parent node within T ? . Let ??X ? = (??X ? )?1 . We assume there exist positive constants c1 , c2 , c3 , c4 , i i such that either   ? ?  ?X ? + ?X ?   i j  2 log   ? log |??Xi? | ? log |??Xj? | ? c4   2 or ??X ? ? ??X ? 22 ? c3 . We also assume i j ?min (??Xj? ) ? c1 , ?j = 1, . . . , mT ? , where ?min (?) denotes the smallest eigenvalue. Furthermore, for any T ? TN and any A ? ?(T ), we have P (X ? A) ? c2 . Theorem 3. Under the above assumptions, we have c1 c2 c3 , c2 c4 } 2 T ?TN , where c1 , c2 , c3 , c4 are defined in Assumption 4. Moreover, the Go-CART estimator in both the penalized risk minimization and held-out risk minimization form is tree partition consistent. inf ?(T ? )?(T ) inf ?T , ?T ?MT R(T, ?T , ?T ) ? R(T ? , ??T ? , ??T ? ) > min{ This result shows that, with high probability, we obtain a finer partition than T ? ; the assumptions do not, however, control the size of the resulting partition. The proof of this result appears in the supplementary material. 5 Experiments We now present the performance of the greedy partitioning algorithm of Section 3 on both synthetic data and a real meteorological dataset. In the experiment, we always set the dyadic integer N = 210 to ensure that we can obtain fine-tuned partitions of the input space X . 5.1 Synthetic Data We generate n data points x1 , . . . , xn ? Rd with n = 10, 000 and d = 10 uniformly distributed on the unit hypercube [0, 1]d . 
We split the square [0, 1]2 defined by the first two dimension of the unit 6 20 1 2 19 3 18 4 17 1 20 8 14 4 13 5 1 5 4 5 14 10 11 6 18 4 17 X1> 0.5 7 14 13 2 X2< 0.5 9 2 10 12 11 3 4 18 3 17 7 14 30 31 2 3 X2> 0.25 X1> 0.75 28 29 5 6 16 X1< 0.75 7 15 8 14 9 13 (b) 9 12 10 11 3 8 X1< 0.25 4 5 9 10 X2< 0.75 X1> 0.25 6 15 X2> 0.5 7 12 8 14 13 9 X2> 0.125 X1> 0.25 2 3 19 X1< 0.375 X1< 0.25 20 11 3 18 4 6 X2> 0.625 X2> 0.75 X1< 0.875 X2< 0.75 7 15 19 X2< 0.625 8 14 X1> 0.875 13 9 11 10 12 20 1 2 3 19 18 4 17 20 X1< X1> 0.125 0.125 5 6 16 21 X1< X1> 0.125 0.125 22 X2< X2> 0.375 0.375 23 X2< X2> 0.375 0.375 24 X1< X1> 0.625 0.625 25 X1< X1> 0.625 0.625 26 X2< X2> 0.875 0.875 4 18 27 X2< X2> 0.875 0.875 17 5 6 16 7 15 21.3 5 17 16 16 X1> 0.375 2 19 X2> 0.75 X2< 0.5 15 X2< 0.125 10 11 1 15 7 8 14 8 14 13 9 28 10 11 29 30 31 13 14 32 33 34 35 5 37 38 39 17 40 18 41 42 43 13 9 12 1 20 20 2 3 19 1 2 20 19 18 5 6 1 2 18 4 5 17 6 16 7 14 8 9 13 11 10 15 8 14 13 9 12 11 7 6 7 9 13 11 8 14 9 13 12 10 11 10 20 2 20 3 17 6 15 7 8 9 13 12 11 10 10 11 21.2 21.1 21 20.9 2 3 18 5 16 14 1 19 4 18 5 15 8 12 10 19 4 16 6 14 3 18 17 5 15 1 2 19 4 17 16 7 15 1 20 3 19 3 18 4 17 16 12 36 6 Held?out Risk 19 18 17 12 13 4 18 2 16 1 1 20 10 12 11 12 10 19 7 X2< 0.25 8 13 20 11 17 4 15 6 9 12 6 1 32 34 8 13 X2> 0.5 5 16 20 14 6 7 8 14 1 33 35 5 16 15 15 20 19 18 36 37 2 19 10 11 1 3 9 12 1 X1< 0.5 5 20 8 13 9 12 4 40 42 38 39 6 7 15 7 8 13 3 18 16 10 11 3 16 14 2 2 17 6 15 20 19 1 18 9 12 17 16 17 20 19 3 18 17 6 7 15 2 19 41 43 5 16 4 5 17 6 16 15 7 8 20.8 0 5 10 15 20 25 Splitting Sequence No. 14 9 13 12 11 10 (a) (c) Figure 1: Analysis of synthetic data. (a) Estimated dyadic tree structure; (b) Ground true partition. The horizontal axis corresponds to the first dimension denoted as X1 while the vertical axis corresponds to the second dimension denoted by X2 . The bottom left point corresponds to [0, 0] and the upper right point corresponds to [1, 1]. It is also the induced partition on [0, 1]2 . The number labeled on each subregion corresponds to each leaf node ID of the tree in (a); (c) The held-out negative log-likelihood risk for each split. The order of the splits corresponds the ID of the tree node (from small to large). hypercube into 22 subregions as shown in Figure 1 (b). For the t-th subregion where 1 ? t ? 22, we generate an Erd?os-R?enyi random graph Gt = (V t , E t ) with the number of vertices p = 20, the number of edges |E| = 10 and the maximum node degree is four. Based on Gt , we generate the inverse covariance matrix ?t according to ?ti,j = I(i = j) + 0.245 ? I((i, j) ? E t ), where 0.245 guarantees the positive definiteness of ?t when the maximum node degree is 4. For each data point xi in the t-th subregion, we sample a 20-dimensional response vector yi from a multivariate ?1  Gaussian distribution N20 0, ?t . We also create an equally-sized held-out dataset in the same manner based on {?t }22 . t=1 The learned dyadic tree structure and its induced partition are presented in Figure 1. We also provide the estimated graphs for some nodes. We conduct 100 monte-carlo simulations and find that 82 times out of 100 runs our algorithm perfectly recover the ground true partitions on the X1 -X2 plane and never wrongly split any irrelevant dimensions ranging from X3 to X10 . Moreover, the estimated graphs have interesting patterns. Even though the graphs within each subregion are sparse, the estimated graph obtained by pooling all the data together is highly dense. 
As the greedy algorithm proceeds, the estimated graphs become sparser and sparser. However, for the immediate parent of the leaf nodes, the graphs become denser again. Out of the 82 simulations where we correctly identify the tree structure, we list the graph estimation performance for subregions 28, 29, 13, 14, 5, 6 in terms of precision, recall, and F1-score in Table 1. Table 1: The graph estimation performance over different subregions Mean values over 100 runs (Standard deviation) subregion region 28 region 29 region 13 region 14 region 5 region 6 Precision Recall F1 ? score 0.8327 (0.15) 0.7890 (0.16) 0.7880 (0.11) 0.8429 (0.15) 0.7990 (0.18) 0.7923 (0.12) 0.9853 (0.04) 1.0000 (0.00) 0.9921 (0.02) 0.9821 (0.05) 1.0000 (0.00) 0.9904 (0.03) 0.9906 (0.04) 1.0000 (0.00) 0.9949 (0.02) 0.9899 (0.05) 1.0000 (0.00) 0.9913 (0.02) We see that for a larger subregion (e.g. 13, 14, 5, 6), it is easier to obtain better recovery performance; while good recovery for a very small region (e.g. 28, 29) becomes more challenging. We also plot the held-out risk in the subplot (c). As can be seen, the first few splits lead to the most significant decreases of the held-out risk. The whole risk curve illustrates a diminishing return behavior. Correctly splitting the large rectangle leads to a significant decrease in the risk; in contrast, splitting the middle rectangles does not reduce the risk as much. We also conducted simulations where the true conditional covariance matrix is a continuous function of x; these are presented in the supplementary materials. 7 16 42 44 17 21 18 19 41 43 8 20 6 46 48 51 52 56 58 15 39 40 45 47 5 34 13 31 32 36 33 4 50 14 38 57 65 59 61 CH4 CO TMX H2 27 64 24 26 23 25 WET TMP 63 CLD TMN VAP DTR FRSPRE 37 29 10 55 CO2 GLO 1 35 12 3 49 DIR 66 28 9 60 62 (b) 2 30 11 53 54 7 22 16 42 44 41 43 39 40 21 18 17 6 19 60 62 59 61 66 8 9 20 28 65 46 48 51 52 56 58 24 26 45 47 49 50 55 57 23 25 34 36 33 35 64 27 CO2 DIR CO2 DIR CH4 GLO GLO CO DIR CH4 CO 5 TMX 13 WET WET TMP CLD TMN DTR VAP FRS PRE CLD TMN DTR VAP FRSPRE 38 14 H2 H2 H2 TMP 63 CH4 GLO CO TMX TMX 15 CO2 31 32 1 37 WET TMP TMN 4 12 3 10 29 30 2 CLD VAP DTR FRS 11 53 54 22 7 PRE (c) (a) Figure 2: Analysis of climate data. (a) Learned partitions for the 100 locations and projected to the US map, with the estimated graphs for subregions 3, 10, and 33; (b) Estimated graph with data pooled from all 100 locations; (c) the re-scaled partition pattern induced by the learned dyadic tree structure. 5.2 Climate Data Analysis In this section, we apply Go-CART on a meteorology dataset collected in a similar approach as in [8]. The data contains monthly observations of 15 different meteorological factors from 1990 to 2002. We use the data from 1990 to 1995 as the training data and data from 1996 to 2002 as the held-out validation data. The observations span 100 locations in the US between latitudes 30.475 to 47.975 and longitudes -119.75 to -82.25. The 15 meteorological factors measured for each month include levels of CO2 , CH4 , H2 , CO, average temperature (TMP) and diurnal temperature range (DTR), minimum temperate (TMN), maximum temperature (TMX), precipitation (PRE), vapor (VAP), cloud cover (CLD), wet days (WET), frost days (FRS), global solar radiation (GLO), and direct solar radiation (DIR). As a baseline, we estimate a sparse graph on the data pooled from all 100 locations, using the glasso algorithm; the estimated graph is shown in Figure 2 (b). It is seen that the greenhouse gas factor CO2 is isolated from all the other factors. 
This apparently contradicts the basic domain knowledge that CO2 should be correlated with the solar radiation factors (including GLO and DIR), according to the IPCC report [6], one of the most authoritative reports in the field of meteorology. The reason for the missing edges in the pooled data may be that positive correlations at one location are canceled by negative correlations at other locations. Treating the longitude and latitude of each site as a two-dimensional covariate X, and the meteorology data of the p = 15 factors as the response Y, we estimate a dyadic tree structure using the greedy algorithm. The result is a partition with 66 subregions, shown in Figure 2. The graphs for subregions 3 and 10 (corresponding to the coast of California and to Arizona) are shown in subplot (a) of Figure 2. The graphs for these two adjacent subregions are quite similar, suggesting spatial smoothness of the learned graphs. Moreover, in both graphs CO2 is connected to the solar radiation factor GLO through CH4. In contrast, for subregion 33, which corresponds to the northern part of Arizona, the estimated graph is quite different. In general, we find that the graphs corresponding to locations along the coasts are sparser than those corresponding to locations in the mainland. Such observations, which require validation and interpretation by domain experts, are examples of the capability of graph-valued regression to provide a useful tool for high-dimensional data analysis.

References

[1] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation. Journal of Machine Learning Research, 9:485-516, March 2008.
[2] G. Blanchard, C. Schäfer, Y. Rozenholc, and K.-R. Müller. Optimal dyadic decision trees. Machine Learning, 66(2-3):209-241, 2007.
[3] L. Breiman, J. Friedman, C. J. Stone, and R. Olshen. Classification and Regression Trees. Wadsworth Publishing Co Inc, 1984.
[4] D. Edwards. Introduction to Graphical Modelling. Springer-Verlag Inc, 1995.
[5] J. H. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2007.
[6] IPCC. Climate Change 2007: The Physical Science Basis. IPCC Fourth Assessment Report.
[7] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[8] A. C. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe. Spatial-temporal causal modeling for climate change attribution. In ACM SIGKDD, 2009.
[9] P. Ravikumar, M. Wainwright, G. Raskutti, and B. Yu. Model selection in Gaussian graphical models: High-dimensional consistency of ℓ1-regularized MLE. In Advances in Neural Information Processing Systems 22. MIT Press, Cambridge, MA, 2009.
[10] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[11] C. Scott and R. Nowak. Minimax-optimal classification with dyadic decision trees. IEEE Transactions on Information Theory, 52(4):1335-1353, 2006.
[12] J. Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley, 1990.
[13] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
[14] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. Machine Learning, 78(4), 2010.
Spike Timing-Dependent Plasticity as Dynamic Filter

Joscha T. Schmiedt*, Christian Albers and Klaus Pawelzik
Institute for Theoretical Physics, University of Bremen, Bremen, Germany
[email protected], {calbers, pawelzik}@neuro.uni-bremen.de

* Postal correspondence should be addressed to Universität Bremen, Fachbereich 1, Institut für Theoretische Physik, Abt. Neurophysik, Postfach 330 440, D-28334 Bremen, Germany

Abstract

When stimulated with complex action potential sequences, synapses exhibit spike timing-dependent plasticity (STDP) with modulated pre- and postsynaptic contributions to long-term synaptic modifications. In order to investigate the functional consequences of these contribution dynamics (CD), we propose a minimal model formulated in terms of differential equations. We find that our model reproduces data from two recent experimental studies with a small number of biophysically interpretable parameters. The model allows us to investigate the susceptibility of STDP to arbitrary time courses of pre- and postsynaptic activities, i.e. its nonlinear filter properties. We demonstrate this for the simple example of small periodic modulations of pre- and postsynaptic firing rates, for which our model can be solved. It predicts synaptic strengthening for synchronous rate modulations. Modifications are dominant in the theta frequency range, a result which underlines the well-known relevance of theta activities in hippocampus and cortex for learning. We also find emphasis of specific baseline spike rates and suppression for high background rates. The latter suggests a mechanism of network activity regulation inherent in STDP. Furthermore, our novel formulation provides a general framework for investigating the joint dynamics of neuronal activity and the CD of STDP in both spike-based and rate-based neuronal network models.

1 Introduction

During the past decade, the effects of exact spike timing on the change of synaptic connectivity have been studied extensively. In vitro studies have shown that the induction of long-term potentiation (LTP) requires the presynaptic input to a cell to precede the postsynaptic output, and vice versa for long-term depression (LTD) (see [1, 2, 3]). This phenomenon has been termed spike timing-dependent plasticity (STDP) and emphasizes the importance of a causal order in neuronal signaling. It thereby extends pure Hebbian learning, which requires only the coincidence of pre- and postsynaptic activity. Consequently, experiments have shown an asymmetric exponential dependence on the timing of spike pairs and a molecular mechanism mostly dependent on the influx of Ca2+ (see [4, 5] for reviews). Further, when induced with more complex spike trains, synaptic modification shows nonlinearities ([6, 7, 8]), indicating the influence of short-term plasticity. Theoretical approaches to STDP cover studies using the asymmetric pair-based STDP window as a lookup table, more biophysical models based on synaptic and neuronal variables, and sophisticated kinetic models (for a review see [9]). Recently, the experimentally observed influence of the postsynaptic membrane potential (e.g. [10]) has also been taken into account ([11]). Our approach is based on differential Hebbian learning ([12, 13]), which generates asymmetric timing windows similar to STDP ([14]), depending on the shape of the back-propagating action
We extend it with a mechanism for activating learning by an increase in postsynaptic activity, because both the induction of LTP and LTD require [Ca2+] to exceed a threshold ([16]). Moreover, we include a mechanism for adaptive suppression on both synaptic sides, similar to the model in [7]. Finally, for simplicity we assume that both the presynaptic and the postsynaptic side function as low-pass filters; a spike leaves a fast increasing and exponentially decaying trace. Together, we propose a set of differential equations which captures the contribution dynamics (CD) of pre- and postsynaptic activities to STDP, thereby describing synaptic plasticity as a filter. Our framework reproduces experimental findings from two recent in vitro studies in the visual cortex and the hippocampus in most details. Furthermore, it proves to be particularly suitable for the analysis of the susceptibility of STDP to pre- and postsynaptic rate modulations. This is demonstrated by an analysis of synaptic changes depending on oscillatory modulations of baseline firing rates.

2 Formulation of the model

We use a variant of classical differential Hebbian learning, assuming a change of synaptic connectivity w which depends on the presynaptic activity trace y_pre and the temporal derivative of the postsynaptic activity trace y_post:

\dot{w}(t) = c_w \, y_{\mathrm{pre}}(t) \, \dot{y}_{\mathrm{post}}(t) .   (1)

c_w denotes a constant learning rate. An illustration of this learning rule for pairs of spikes is given in Figure 1B. For simplicity, we assume these activity traces to be abstract low-pass filtered versions of neuronal activity x in the presynaptic and postsynaptic cells, e.g. the concentration of Ca2+ or the amount of bound glutamate:

\dot{y}_{\mathrm{pre}}(t) = u_{\mathrm{pre}}(t) \, x_{\mathrm{pre}}(t) - \frac{y_{\mathrm{pre}}(t)}{\tau_{\mathrm{pre}}}   (2)
\dot{y}_{\mathrm{post}}(t) = u_{\mathrm{post}}(t) \, z(t) \, x_{\mathrm{post}}(t) - \frac{y_{\mathrm{post}}(t)}{\tau_{\mathrm{post}}}   (3)

The dynamics of the y's are characterized by their respective time constants τ_pre and τ_post. The contribution of each spike is regulated by a suppressing attenuation factor u pre- and postsynaptically. On the postsynaptic side an additional activation factor z "enables" the synapse to learn. The dynamics of u and z are discussed below. x represents neuronal activity, which can be either a time-continuous firing rate or spike trains given by series of δ pulses

x_{\mathrm{pre,post}}(t) = \sum_i \delta(t - t^i_{\mathrm{pre,post}}) ,   (4)

which allows analytical investigations of the properties of our model. Note that formally x(t) then has to be taken as x(t + 0). An illustrative overview over the different parts of the model with sample trajectories is shown in Figure 1A. We define the relative change of synaptic connectivity after a period T from Equation (1) as

\Delta w = \frac{w(t_0 + T)}{w(t_0)} - 1 = \frac{c_w}{w(t_0)} \int_T y_{\mathrm{pre}} \, \dot{y}_{\mathrm{post}} \, dt .   (5)

The dependence on the initial synaptic strength w(t_0) as observed in [3, 8] shall not be discussed here, but can easily be achieved by making the learning rate c_w in Equation (1) w-dependent. Here, w(t_0) is chosen to be 1. Ignoring attenuation and activation, a single pair of spikes at temporal distance Δt analytically yields the typical STDP window (see Figure 2A and 3A):

\Delta w(\Delta t) = \begin{cases} c_w \left(1 - \frac{\tau_{\mathrm{pre}}}{\tau_{\mathrm{pre}} + \tau_{\mathrm{post}}}\right) e^{-\Delta t/\tau_{\mathrm{pre}}} & \text{for } \Delta t > 0 \\ -c_w \, \frac{\tau_{\mathrm{pre}}}{\tau_{\mathrm{pre}} + \tau_{\mathrm{post}}} \, e^{\Delta t/\tau_{\mathrm{post}}} & \text{for } \Delta t < 0 \end{cases}   (6)
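To make the trace bookkeeping concrete, the following minimal sketch (our illustration, not the authors' code) integrates Eqs. (1)-(4) with forward Euler for a single spike pair, with attenuation and activation switched off (u = z = 1), and compares the result against the closed-form window of Eq. (6). The time constants are the visual-cortex values quoted in Section 3; the step size and integration horizon are arbitrary choices.

```python
import numpy as np

def window_numeric(dt_pair, c_w=1.0, tau_pre=0.0135, tau_post=0.0428,
                   dt=1e-5, T=0.3):
    """Forward-Euler integration of Eqs. (1)-(4) for one spike pair."""
    t_pre = max(-dt_pair, 0.0)           # pre spike time
    t_post = max(dt_pair, 0.0)           # post spike time
    k_pre, k_post = round(t_pre / dt), round(t_post / dt)
    y_pre = y_post = w = 0.0
    for k in range(int(T / dt)):
        y_post_old = y_post
        if k == k_pre:
            y_pre += 1.0                 # delta spike, Eq. (4)
        if k == k_post:
            y_post += 1.0
        y_pre -= dt * y_pre / tau_pre    # trace decay, Eqs. (2)-(3)
        y_post -= dt * y_post / tau_post
        w += c_w * y_pre * (y_post - y_post_old)   # Eq. (1)
    return w

def window_closed(dt_pair, c_w=1.0, tau_pre=0.0135, tau_post=0.0428):
    """Closed-form pair-based window, Eq. (6)."""
    a = tau_pre / (tau_pre + tau_post)
    if dt_pair > 0:
        return c_w * (1 - a) * np.exp(-dt_pair / tau_pre)
    return -c_w * a * np.exp(dt_pair / tau_post)

for d in (0.01, -0.01):
    print("dt = %+.0f ms: numeric %.3f, closed form %.3f"
          % (d * 1000, window_numeric(d), window_closed(d)))
```

For dt approaching zero the two columns agree; the sketch is only meant to make the role of the traces and of the postsynaptic derivative explicit.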
Figure 1: Schematic illustration of differential Hebbian learning with contribution dynamics. A: Pre- and postsynaptic activity (x, second column) is modulated (attenuated with u, activated with z, first column) and filtered (y, third column) before it contributes to differential Hebbian learning (w, fourth column). B: Spike pair example for differential Hebbian learning. Left: a presynaptic spike trace (y_pre) preceding a postsynaptic spike trace (y_post, dotted line) yields a synaptic strengthening due to the initially positive postsynaptic contribution (ẏ_post, solid line), which is always stronger than the following negative part. Right: for the reverse timing the positive presynaptic contribution is only multiplied with the negative postsynaptic trace. Areas contributing to learning are shaded.

The importance of adaptive suppressing mechanisms for synaptic plasticity has been shown experimentally by Froemke and colleagues ([7, 6]). Therefore, we down-regulate the contribution of the spikes to the activity traces y in Equations (2) and (3) with an attenuation factor u on both the pre- and postsynaptic sides:

\dot{u}_{\mathrm{pre}} = \frac{1}{\tau^{\mathrm{rec}}_{\mathrm{pre}}} (1 - u_{\mathrm{pre}}) - c_{\mathrm{pre}} u_{\mathrm{pre}} x_{\mathrm{pre}}   (7)
\dot{u}_{\mathrm{post}} = \frac{1}{\tau^{\mathrm{rec}}_{\mathrm{post}}} (1 - u_{\mathrm{post}}) - c_{\mathrm{post}} (u_{\mathrm{post}} - u_0) x_{\mathrm{post}}   (8)

This should be understood as an abstract representation of, for instance, the depletion of transmitters in the presynaptic bouton ([17]) or the frequency-dependent spike attenuation in dendritic spines ([18]), respectively. These recover with their time constants τ^rec and are bounded between u_0 and 1. For the presynaptic side we assume in the following u_0^pre = 0, so we abbreviate u_0 = u_0^post. The constants c_pre, c_post ∈ [0, 1] denote the impact a spike has on the relaxed synapse. In several experiments it has been shown that a single spike is not sufficient to induce synaptic modification ([10, 8]). Therefore, we introduce a spike-induced postsynaptic activation factor z,

\dot{z} = c_{\mathrm{act}} x_{\mathrm{post}} z - \gamma (z - z_0)^2 ,   (9)

which enhances the contribution of a postsynaptic spike to the postsynaptic trace, e.g. by the removal of the Mg2+ block from postsynaptic NMDA receptors ([19, 5]). The nonlinear positive feedback is introduced to describe strong enhancing effects such as autocatalytic mechanisms, which have been suggested to play a role in learning on several time-scales ([20, 21]). The activation z decays hyperbolically to a lower bound z_0, and the contribution of a spike is weighted with the constant c_act.
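The spike-triggered terms of Eqs. (7)-(9) can be folded into the same integration scheme. The fragment below (ours; it treats the δ-spike contributions to first order, and the ordering of the z update relative to the postsynaptic trace jump is an assumption) shows one update step of the contribution dynamics for a step of size dt, with spike indicators s_pre, s_post in {0, 1}.

```python
def cd_step(u_pre, u_post, z, s_pre, s_post, dt,
            c_pre, c_post, c_act, tau_rec_pre, tau_rec_post,
            gamma, u0, z0):
    """One Euler step of the contribution dynamics, Eqs. (7)-(9)."""
    if s_pre:
        u_pre -= c_pre * u_pre                    # Eq. (7), spike term
    if s_post:
        z += c_act * z                            # Eq. (9), spike term
        u_post -= c_post * (u_post - u0)          # Eq. (8), spike term
    u_pre += dt * (1.0 - u_pre) / tau_rec_pre     # recovery toward 1
    u_post += dt * (1.0 - u_post) / tau_rec_post
    z -= dt * gamma * (z - z0) ** 2               # hyperbolic decay to z0
    return u_pre, u_post, z
```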
3 Comparison to experiments

In order to evaluate our model we implemented experimental stimulation protocols from in vitro studies on synapses of the visual cortex ([7]) and the hippocampus ([8]) of rats. In both studies, simple pairs of spikes and more complex spike trains were artificially elicited in the presynaptic and the postsynaptic cell and the induced change of synaptic connectivity was recorded.

Froemke and colleagues ([7]) focused on the effects of spike bursts on synaptic modification in the visual cortex. In addition to the classical STDP pairing protocol (a presynaptic spike preceding or following a postsynaptic spike after a specific time Δt), four other experimental protocols (see Figure 2B to E) were performed: (1) 5-5 bursts with five spikes of a certain frequency on both synaptic sides, where the postsynaptic side follows the presynaptic side, (2) presynaptic 100 Hz bursts with n spikes following one postsynaptic spike (post-n-pre), (3) presynaptic 100 Hz bursts with different numbers of spikes followed by one postsynaptic spike (n-pre-post) and (4) a post-pre pair with a varying number of following postsynaptic spikes (post-pre-n-post).

Figure 2: Differential Hebbian learning with CD reproduces synaptic modification induced with STDP spike patterns in visual cortex. Data taken from [7], personal communication. A: experimental fit and model prediction with Equation (6) of pair-based STDP. B: dependence of synaptic modifications on the frequency of 5-5 bursts with presynaptic spikes following postsynaptic spikes by 6 ms. C, D and E: synaptic modification induced by post-n-pre, n-pre-post and post-pre-n-post 100 Hz spike trains.

Figure 3: Differential Hebbian learning with CD reproduces synaptic modification induced with STDP spike patterns in hippocampus. Data taken from [8] as reported in [22]. A: experimental fit and model prediction with Equation (6) of pair-based STDP. B: quadruplet protocol. C and D: post-pre-post and pre-post-pre triplet protocols for different interspike intervals.

Table 1: Parameters and evaluation results for the data sets from visual cortex ([7]) and hippocampus ([8]). E: normalized mean-square error, S: ratio of correctly predicted signs of synaptic modification.

               c_pre  c_post  c_act  τ^rec_pre [s]  τ^rec_post [s]  γ  u_0   z_0  E     S
Visual cortex  0.9    1       1.5    2              0.2             1  0.01  1    4.04  18/18
Hippocampus    0.6    0.4     3.5    0.5            0.5             1  0.7   0.2  2.16  10/11

In the hippocampal study of Wang et al. ([8]) synaptic modification induced by triplets (pre-post-pre and post-pre-post) and quadruplets (pre-post-post-pre and post-pre-pre-post) of spikes was measured while the respective interspike intervals were varied (see Figure 3B to D).

As a first step we took the time constants from the experimentally measured pair-based STDP windows as our low-pass filter time constants (see Equation (6)). They remained constant for each data set: (1) τ_pre = 13.5 ms and τ_post = 42.8 ms for [7], (2) τ_pre = 16.8 ms and τ_post = 33.7 ms for [8] (taken from [23] since not present in the study). Next, we chose the learning rate c_w in Equation (6) to fit the synaptic change for the pairing protocol: (1) c_w = 1.56 for the visual cortex data, (2) c_w = 0.99 for the hippocampal data set. The remaining parameters were estimated manually within biologically plausible ranges and are shown in Table 1. The model was then applied to the more complex stimulation protocols by solving the differential equations semi-analytically, i.e. separately for every spike and the following interspike interval. As a measure for the prediction error of our model we used the normalized mean-square error

E = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{\Delta w_i^{\mathrm{exp}} - \Delta w_i^{\mathrm{mod}}}{\sigma_i} \right)^2 ,   (10)

where Δw_i^exp and Δw_i^mod are the experimentally measured and the predicted modifications of synaptic strength in the ith experiment; N is the number of data points (N = 18 for the visual cortex data set, N = 11 for the hippocampal data set). σ_i is the standard error of the mean of the experimental data. Additionally we counted the number of correctly predicted signs S of synaptic modification, i.e. induced depression or potentiation. The prediction error for both data sets is shown in Table 1.
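Both evaluation measures are straightforward to compute; a small helper (our sketch, operating on arrays of measured and predicted modifications) might read:

```python
import numpy as np

def evaluate(dw_exp, dw_mod, sigma):
    """Our transcription of Eq. (10) and of the sign count S."""
    dw_exp, dw_mod, sigma = map(np.asarray, (dw_exp, dw_mod, sigma))
    E = float(np.mean(((dw_exp - dw_mod) / sigma) ** 2))
    S = int(np.sum(np.sign(dw_exp) == np.sign(dw_mod)))
    return E, S
```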
Figure 4: Synaptic change depending on frequency f and phase shift Δφ of pre- and postsynaptic rate modulations for different baseline rates x_0. Panels span baseline rates x_0 = 1, 5, 10 Hz for cortex and 1, 5, 30 Hz for hippocampus; axes are modulation frequency f [Hz] versus phase shift Δφ, with color-coded ΔW (a.u.). The color codes are identical within each column and in arbitrary units. Note the strong suppression with increasing baseline rate for cortical synapses, which is due to strong attenuation effects of pre- and postsynaptic contributions. It is weaker for hippocampal synapses because we found the postsynaptic attenuation to be bounded (u_0 = 0.7).

4 Phase, frequency and baseline rate dependence of STDP with contribution dynamics

As shown in the previous section our model can reproduce the experimental findings of synaptic weight changes in response to spike sequences surprisingly well and yields better fits than former studies (e.g. [22]). The proposed framework, however, is not restricted to spike sequences but allows one to investigate synaptic changes depending on arbitrary pre- and postsynaptic activities. For instance it could be used for investigations of the plasticity effects in simulations with inhomogeneous Poisson processes. Taking x(t) to be firing rates of Poissonian spike trains, our account of STDP represents a useful approximation for the expected changes of synaptic strength depending on the time courses of x_pre and x_post (compare e.g. [24]). Therefore our model can also serve as a building block in rate-based network models for investigating the joint dynamics of neuronal activities and synaptic weights. Here, we demonstrate the benefit of our approach for determining the filter properties of STDP subject to CD, i.e. we use the equations together with the parameters from the experiments to determine the dependency of weight changes on frequency, relative phase Δφ and baseline rates of modulated pre- and postsynaptic firing rates. While for substantial modulations of firing rates the nonlinearities are difficult to treat analytically, for small periodic modulations around a baseline rate x_0 the corresponding synaptic changes can be calculated analytically. This is done by considering

x_{\mathrm{pre}}(t) = x_0 + \varepsilon \cos(2\pi f t) \quad \text{and} \quad x_{\mathrm{post}}(t) = x_0 + \varepsilon \cos(2\pi f t - \Delta\phi) ,   (11)

which for small ε < x_0 allows linearization of all equations, from which one obtains ΔW = Δw/(T τ_pre τ_post), where T = 1/f = 2π/ω is the period of the respective oscillations. Neglecting transients this finally yields the expected weight changes per unit time. Though lengthy, the calculations are straightforward and presented in the supplementary material. We here show only the exact result for the case of constant u = 1 and z = 1:

\Delta W = \frac{\varepsilon^2 \omega \tau_{\mathrm{pre}} \tau_{\mathrm{post}} \sqrt{\omega^2 (\tau_{\mathrm{post}} - \tau_{\mathrm{pre}})^2 + (1 + \omega^2 \tau_{\mathrm{pre}} \tau_{\mathrm{post}})^2}}{2 (1 + \omega^2 \tau_{\mathrm{pre}}^2)(1 + \omega^2 \tau_{\mathrm{post}}^2)} \, \sin\!\left(\Delta\phi + \arctan \frac{\omega (\tau_{\mathrm{post}} - \tau_{\mathrm{pre}})}{1 + \omega^2 \tau_{\mathrm{pre}} \tau_{\mathrm{post}}}\right)   (12)
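As a quick numerical reading of this expression (and of our reconstruction of it from the garbled source), the script below evaluates ΔW over frequency at zero phase shift with the hippocampal time constants of Section 3 and reports where the weight change peaks; the modulation depth eps only scales the result.

```python
import numpy as np

def dW(f_hz, dphi, tau_pre=0.0168, tau_post=0.0337, eps=0.1):
    """u = z = 1 susceptibility, Eq. (12), as reconstructed above."""
    w = 2.0 * np.pi * f_hz
    num = eps**2 * w * tau_pre * tau_post * np.sqrt(
        w**2 * (tau_post - tau_pre)**2
        + (1 + w**2 * tau_pre * tau_post)**2)
    den = 2.0 * (1 + w**2 * tau_pre**2) * (1 + w**2 * tau_post**2)
    phase = np.arctan(w * (tau_post - tau_pre)
                      / (1 + w**2 * tau_pre * tau_post))
    return num / den * np.sin(dphi + phase)

f = np.logspace(-1, 2.5, 2000)        # 0.1 Hz .. ~300 Hz
print("max. weight change at %.1f Hz" % f[np.argmax(dW(f, 0.0))])
```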
The analytical results for the case with CD are shown graphically in Figure 4 using the parameters from cortex and hippocampus, respectively (see Table 1). These plots contain the main findings: (1) rate modulations in the theta frequency range (≈ 7 Hz) lead to the strongest synaptic changes, (2) also for phase-zero synchronous rate modulations weight changes are positive, (3) in hippocampus maximal weight change magnitudes occur at baseline rates around 5 Hz, and (4) for high baseline rates weight changes become suppressed (∝ 1/x_0 for the hippocampus, ∝ 1/x_0^2 for the visual cortex). Numerical simulations with finite rate modulations were found to confirm these analytical predictions surprisingly well. Also for the nonlinear regime and Poissonian spike trains deviations remained moderate.

5 Discussion

STDP has been proposed to represent a fundamental mechanism underlying learning, and many models have explored its computational role (examples are [25, 26, 27]). In contrast, research targeting the computational roles of the dynamical phenomena inherent in STDP is only beginning (see [9]). Here, we formulated a minimal, yet biologically plausible model including the dynamics of how neuronal activity contributes to STDP. We found that our model reproduces the synaptic changes in response to spike sequences in experiments in cortex and hippocampus with high accuracy. Using the corresponding parameters our model predicts weight changes depending on temporal structures in the pre- and postsynaptic activities, including spike sequences and varying firing rates. When applied to pre- and postsynaptic rate modulations our approach quantifies synaptic changes depending on frequency and phase shifts between pre- and postsynaptic activities. A rigorous perturbation analysis of our model reveals that the dynamical filter properties of STDP make weight changes sensitively dependent on combinations of specific features of pre- and postsynaptic signals. In particular, our analysis indicates that both cortical as well as hippocampal STDP is most susceptible to modulations in the theta frequency range. It predicts the dependency of synaptic changes on pre- and postsynaptic phase relations of rate modulations. These results are in line with experimental results on the relation of theta rhythms and learning. For instance, in hippocampus it is well established that theta oscillations are relevant for learning (for a recent paper see [28]). Furthermore, spike activities in hippocampus exhibit specific phase relations with the theta rhythm (for a review see [29]). Also, it has been found that during learning cortex and hippocampus tend to synchronize with particular phase relations that depend on the novelty of the item to be learned ([30]). The results presented here underline these findings and make testable predictions for the corresponding synaptic changes. Also, we find potentiation for zero phase differences and strong attenuation of weight changes at large baseline rates, which is particularly strong for cortical synapses. This finding suggests a mechanism for restricting weight changes at high activity levels, and that STDP is de facto switched off when large firing rates are required for the execution of a function as opposed to learning phases; during the latter, baseline rates should be rather low, which is particularly relevant in cortex.
While for cortical synapses our analysis predicts that very low baseline activities contribute most to weight changes, in hippocampus synaptic modifications peak at baseline firing rates x_0 around 5 Hz, which suggests that x_0 can control learning. Our study suggests that the filter properties of STDP originating from the dynamics of pre- and postsynaptic activity contributions are in fact exploited for learning in the brain. In particular, shifts in baseline rates, as well as the frequency and the respective phases of pre- and postsynaptic rate modulations induced by theta oscillations, could be tuned to match the values that make STDP most susceptible for synaptic modifications. A fascinating possibility thereby is that these features could be used to control the learning rate, which would represent a novel mechanism in addition to other control signals such as neuromodulators.

References

[1] W. Levy and O. Steward. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience, 8(4):791-797, 1983.
[2] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 1997.
[3] G. Q. Bi and M. M. Poo. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24):10464-72, 1998.
[4] P. J. Sjöström, E. A. Rancz, A. Roth, and M. Häusser. Dendritic excitability and synaptic plasticity. Physiological Reviews, 88(2):769-840, 2008.
[5] N. Caporale and Y. Dan. Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience, 2008.
[6] R. C. Froemke and Y. Dan. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 2002.
[7] R. C. Froemke, I. A. Tsay, M. Raad, J. D. Long, and Y. Dan. Contribution of individual spikes in burst-induced long-term synaptic modification. Journal of Neurophysiology, 95(3):1620-9, 2006.
[8] H. X. Wang, R. C. Gerkin, D. W. Nauen, and G. Q. Bi. Coactivation and timing-dependent integration of synaptic potentiation and depression. Nature Neuroscience, 8(2):187-93, 2005.
[9] A. Morrison, M. Diesmann, and W. Gerstner. Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics, 98(6):459-78, 2008.
[10] P. J. Sjöström, G. G. Turrigiano, and S. B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32(6):1149-1164, 2001.
[11] C. Clopath, L. Büsing, E. Vasilaki, and W. Gerstner. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience, 13(3):344-52, 2010.
[12] B. Kosko. Differential Hebbian learning. AIP Conference Proceedings 151 on Neural Networks for Computing, 1987.
[13] A. H. Klopf. A drive-reinforcement model of single neuron function: an alternative to the Hebbian neuronal model. AIP Conference Proceedings, 151(1):265-270, 1986.
[14] P. D. Roberts. Computational consequences of temporally asymmetric learning rules: I. Differential Hebbian learning. Journal of Computational Neuroscience, 7(3):235-246, 1999.
[15] A. Saudargiene, B. Porr, and F. Wörgötter. How the shape of pre- and postsynaptic signals can influence STDP: a biophysical model. Neural Computation, 2004.
[16] T. Nevian and B. Sakmann. Spine Ca2+ signaling in spike-timing-dependent plasticity. Journal of Neuroscience, 26(43):11001-13, 2006.
[17] M. V. Tsodyks and H. Markram. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences, 94(2):719-723, 1997.
[18] E. Tanaka, H. Higashi, and S. Nishi. Membrane properties of guinea pig cingulate cortical neurons in vitro. J Neurophysiol, 65(4):808-821, 1991.
[19] L. Nowak, P. Bregestovski, P. Ascher, A. Herbet, and A. Prochiantz. Magnesium gates glutamate-activated channels in mouse central neurones. Nature, 307(5950):462-5, 1984.
[20] J. E. Lisman. A mechanism for memory storage insensitive to molecular turnover: a bistable autophosphorylating kinase. Proceedings of the National Academy of Sciences, 82(9):3055-3057, 1985.
[21] U. S. Bhalla and R. Iyengar. Emergent properties of networks of biological signaling pathways. Science, 283(5400):381-387, 1999.
[22] J. P. Pfister and W. Gerstner. Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience, 26(38):9673-82, 2006.
[23] G. Bi and M. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 24:139-66, 2001.
[24] M. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses. Neural Computation, 10(4):821-35, 1998.
[25] M. Lengyel, J. Kwag, O. Paulsen, and P. Dayan. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience, 8(12):1677-83, 2005.
[26] F. Wörgötter and B. Porr. Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms. Neural Computation, 17(2):245-319, 2005.
[27] E. M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17(10):2443-52, 2007.
[28] U. Rutishauser, I. B. Ross, A. N. Mamelak, and E. M. Schuman. Human memory strength is predicted by theta-frequency phase-locking of single neurons. Nature, 464(7290):903-7, 2010.
[29] Y. Yamaguchi, N. Sato, H. Wagatsuma, Z. Wu, C. Molter, and Y. Aota. A unified view of theta-phase coding in the entorhinal-hippocampal system. Current Opinion in Neurobiology, 17(2):197-204, 2007.
[30] A. Jeewajee, C. Lever, S. Burton, J. O'Keefe, and N. Burgess. Environmental novelty is signaled by reduction of the hippocampal theta frequency. Hippocampus, 18(4):340-8, 2008.
Feature Construction for Inverse Reinforcement Learning

Zoran Popović, University of Washington, [email protected]
Sergey Levine, Stanford University, [email protected]
Vladlen Koltun, Stanford University, [email protected]

Abstract

The goal of inverse reinforcement learning is to find a reward function for a Markov decision process, given example traces from its optimal policy. Current IRL techniques generally rely on user-supplied features that form a concise basis for the reward. We present an algorithm that instead constructs reward features from a large collection of component features, by building logical conjunctions of those component features that are relevant to the example policy. Given example traces, the algorithm returns a reward function as well as the constructed features. The reward function can be used to recover a full, deterministic, stationary policy, and the features can be used to transplant the reward function into any novel environment on which the component features are well defined.

1 Introduction

Inverse reinforcement learning aims to find a reward function for a Markov decision process, given only example traces from its optimal policy. IRL solves the general problem of apprenticeship learning, in which the goal is to learn the policy from which the examples were taken. The MDP formalism provides a compact method for specifying a task in terms of a reward function, and IRL further simplifies task specification by requiring only a demonstration of the task being performed. However, current IRL methods generally require not just expert demonstrations, but also a set of features or basis functions that concisely capture the structure of the reward function [1, 7, 9, 10].

Incorporating feature construction into IRL has been recognized as an important problem for some time [1]. It is often easier to enumerate all potentially relevant component features ("components") than to manually specify a set of features that is both complete and fully relevant. For example, when emulating a human driver, it is easier to list all known aspects of the environment than to construct a complete and fully relevant reward basis. The difficulty of performing IRL given only such components is that many of them may have important logical relationships that make it impossible to represent the reward function as their linear combination, while enumerating all possible relationships is intractable. In our example, some of the components, like the color of the road, may be irrelevant. Others, like the car's speed and the presence of police, might have an important logical relationship for a driver who prefers to speed.

We present an IRL algorithm that constructs reward features out of a large collection of component features, many of which may be irrelevant for the expert's policy. The Feature construction for Inverse Reinforcement Learning (FIRL) algorithm constructs features as logical conjunctions of the components that are most relevant for the observed examples, thus capturing their logical relationships. At the same time, it finds a reward function for which the optimal policy matches the examples. The reward function can be used to recover a deterministic, stationary policy for the expert, and the features can be used to transplant the reward to any novel environment on which the component features are well defined. In this way, the features act as a portable explanation for the expert's policy, enabling the expert's behavior to be predicted in unfamiliar surroundings.
2 Algorithm Overview

We define a Markov decision process as M = {S, A, θ, γ, R}, where S is a state space, A is a set of actions, θ_{sas′} is the probability of a transition from s ∈ S to s′ ∈ S under action a ∈ A, γ ∈ [0, 1) is a discount factor, and R(s, a) is a reward function. The optimal policy π* is the policy that maximizes the expected discounted sum of rewards E[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid \pi^*, \theta]. FIRL takes as input M \ R, as well as a set of traces from π*, denoted by D = {(s_{1,1}, a_{1,1}), ..., (s_{n,T}, a_{n,T})}, where s_{i,t} is the tth state in the ith trace. FIRL also accepts a set of component features of the form φ : S → Z, which are used to construct a set of relevant features for representing R.

The algorithm iteratively constructs both the features and the reward function. Each iteration consists of an optimization step and a fitting step. The algorithm begins with an empty feature set Φ^(0). The optimization step of the ith iteration computes a reward function R^(i) using the current set of features Φ^(i−1), and the following fitting step determines a new set of features Φ^(i).

The objective of the optimization step is to find a reward function R^(i) that best fits the last feature hypothesis Φ^(i−1) while remaining consistent with the examples D. This appears similar to the objective of standard IRL methods. However, prior IRL algorithms generally minimize some measure of deviation from the examples, subject to the constraints of the provided features [1, 7, 8, 9, 10]. In contrast, the FIRL optimization step aims to discover regions where the current features are insufficient, and must be able to step outside of the constraints of these features. To this end, the reward function R^(i) is found by solving a quadratic program, with constraints that keep R^(i) consistent with D, and an objective that penalizes the deviation of R^(i) from its projection onto the linear basis formed by the features Φ^(i−1).

The fitting step analyzes the reward function R^(i) to generate a new feature hypothesis Φ^(i) that better captures the variation in the reward function. Intuitively, the regions where R^(i) is poorly represented by Φ^(i−1) correspond to features that must be refined further, while regions where different features take on similar rewards are indicative of redundant features that should be merged. The hypothesis is constructed by building a regression tree on S for R^(i), with the components acting as tests at each node. Each leaf ℓ contains some subset of S, denoted Ω_ℓ. The new features are the set of indicator functions for membership in Ω_ℓ. A simple explanation of the reward function is often more likely to be the correct one [7], so we prefer the smallest tree that produces a sufficiently rich feature set to represent a reward function consistent with the examples. To obtain such a tree, we stop subdividing a node ℓ when setting the reward for all states in Ω_ℓ to their average induces an optimal policy consistent with the examples.

The constructed features are iteratively improved through the interaction between the optimization and fitting steps. Since the optimization is constrained to be consistent with D, if the current set of features is insufficient to represent a consistent reward function, R^(i) will not be well-represented by the features Φ^(i−1). This intra-feature reward variance is detected in the fitting step, and the features that were insufficiently refined are subdivided further, while redundant features that have little variance between them are merged.
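For orientation, the alternation described above can be summarized in a short skeleton (our paraphrase, not the authors' code); optimize_reward and fit_features stand for the procedures detailed in Sections 3 and 4 and are supplied by the caller:

```python
def firl(mdp, D, components, optimize_reward, fit_features, n_iterations):
    """Skeleton of the FIRL iteration (our paraphrase of Section 2).

    optimize_reward(mdp, D, features) -> reward        (Section 3)
    fit_features(mdp, D, reward, components, max_depth)
        -> (tree, features)                            (Section 4)
    """
    features, tree, reward = [], None, None   # Phi^(0) is empty
    for i in range(n_iterations):
        reward = optimize_reward(mdp, D, features)
        # depth limit grows by one per iteration (iterative deepening,
        # as described in Section 4)
        tree, features = fit_features(mdp, D, reward, components,
                                      max_depth=i + 1)
    return reward, tree, features
```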
3 Optimization Step

During the ith optimization step, we compute a reward function R^(i) using the examples D and the current feature set Φ^(i−1). This reward function is chosen so that the optimal policy under the reward is consistent with the examples D and so that it minimizes the sum of squared errors between R^(i) and its projection onto the linear basis of features Φ^(i−1). Formally, let T_{R→Φ} be a |Φ^(i−1)| by |S| matrix for which T_{R→Φ}(φ, s) = |φ|^{−1} if s ∈ φ, and 0 otherwise, and let T_{Φ→R} be a |S| by |Φ^(i−1)| matrix for which T_{Φ→R}(s, φ) = 1 if s ∈ φ, and 0 otherwise. Thus, T_{Φ→R} T_{R→Φ} R is a vector where the reward in each state is the average over all rewards in the feature that state belongs to. Letting π^R denote the optimal policy under R, the reward optimization problem can be expressed as:

\min_R \; \|R - T_{\Phi\to R} T_{R\to\Phi} R\|^2 \quad \text{s.t.} \quad \pi^R(s) = a \;\; \forall (s, a) \in D   (1)

Unfortunately, the constraint (1) is not convex, making it difficult to solve the optimization efficiently. We can equivalently express it in terms of the value function corresponding to R as

V(s) = R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') \quad \forall (s, a) \in D
V(s) = \max_a \left[ R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') \right] \quad \forall s \in S   (2)

These constraints are also not convex, but we can construct a convex relaxation by using a pseudo-value function that bounds the value function from above, replacing (2) with the linear constraint

V(s) \ge R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') \quad \forall s \notin D

In the special case that the MDP transition probabilities θ are deterministic, these constraints are equivalent to the original constraint (1). We prove this by considering the true value function V* obtained by value iteration, initialized with the pseudo-value function V. Let V′ be the result obtained by performing one step of value iteration. Note that V′(s) ≤ V(s) for all s ∈ S: since V(s) ≥ R(s, a) + γ Σ_{s′} θ_{sas′} V(s′), we must have V(s) ≥ max_a [R(s, a) + γ Σ_{s′} θ_{sas′} V(s′)] = V′(s). Since the MDP is deterministic and the example set D consists of traces from the optimal policy, we have a unique next state for each state-action pair. Let (s_{i,t}, a_{i,t}) ∈ D be the tth state-action pair from the ith expert trace. Since the constraints ensure that V(s_{i,t}) = max_a [R(s_{i,t}, a) + γ V(s′_a)], where s′_a is the (deterministic) successor of s_{i,t} under a, we have V′(s_{i,t}) = V(s_{i,t}) for all i, t, and since V′(s) for s ∉ D can only decrease, we know that the optimal actions in all s_{i,t} must remain the same. Therefore, for each example state s_{i,t}, a_{i,t} remains the optimal action under the true value function V*, and the convex relaxation is equivalent to the original constraint (1). In the case that θ is not deterministic, not all successors of an example state s_{i,t} are always observed, and their values under the pseudo-value function may not be sufficiently constrained. However, empirical tests presented in Figure 2(b) suggest that the constraint (1) is rarely violated under the convex relaxation, even in highly non-deterministic MDPs.

In practice, we prefer a reward function under which the examples are not just part of an optimal policy, but are part of the unique optimal policy [7]. To prevent rewards under which example actions "tie" for the optimal choice, we require that a_{i,t} be better than all other actions in state s_{i,t} by some margin ε, which we accomplish by adding ε to all inequality constraints for state s_{i,t}. The precise value of ε is not important, since changing it only scales the reward function by a constant.
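To illustrate the relaxed program on a concrete instance, the toy script below (ours; it uses cvxpy, whereas the paper's implementation uses the cvx Matlab package, and it omits the feature-merging penalty introduced next) solves the constraints for a 4-state deterministic chain with a single feature containing all states, so that the projection reduces to the overall mean reward:

```python
import numpy as np
import cvxpy as cp

# Toy instantiation (ours, not the paper's code): 4-state chain, two
# actions (0 = left, 1 = right), expert demonstrated going right.
# gamma and the margin eps are arbitrary choices.
n_s, gamma, eps = 4, 0.9, 0.1
nxt = {0: [0, 1], 1: [0, 2], 2: [1, 3], 3: [2, 3]}   # nxt[s][a]
D = [(0, 1), (1, 1), (2, 1)]
D_states = {s for s, _ in D}

R = cp.Variable((n_s, 2))
V = cp.Variable(n_s)
cons = []
for s, a in D:                       # equality on demonstrated pairs
    cons.append(V[s] == R[s, a] + gamma * V[nxt[s][a]])
for s in range(n_s):
    for a in range(2):
        if (s, a) in D:
            continue
        margin = eps if s in D_states else 0.0   # strict preference in D
        cons.append(V[s] >= R[s, a] + gamma * V[nxt[s][a]] + margin)

# with one feature containing all states, the projection is the mean
# reward over all entries (a simplification of the general T matrices)
proj = cp.sum(R) / (n_s * 2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(R - proj)), cons)
prob.solve()
print(np.round(R.value, 2))
```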
All of the constraints in the final optimization are sparse, but the matrix T_{Φ→R} T_{R→Φ} in the original objective can be arbitrarily dense (if, for instance, there is only one feature, which contains all states). Since both T_{Φ→R} and T_{R→Φ} are sparse, and in fact only contain |S||A| non-zero entries, we can make the optimization fully sparse by introducing a new set of variables R_Φ defined as R_Φ = T_{R→Φ} R, yielding the sparse objective ‖R − T_{Φ→R} R_Φ‖².

Recall that the fitting step must determine not only which features must be refined further, but also which features can be merged. We therefore add a second term to the objective to discourage nearby features from taking on different values when it is unnecessary. To that end, we construct a sparse matrix N, where each row k of N corresponds to a pair of features φ_{k1} and φ_{k2} (for a total of K rows). We define N as N_{k,φ_{k1}} = −N_{k,φ_{k2}} = δ(φ_{k1}, φ_{k2}), so that [N R_Φ]_k = (R_{Φ,φ_{k1}} − R_{Φ,φ_{k2}}) δ(φ_{k1}, φ_{k2}). The loss factor δ(φ_{k1}, φ_{k2}) indicates how much we believe a priori that the features φ_{k1} and φ_{k2} should be merged, and is discussed further in Section 4. Since the purpose of the added term is to allow superfluous features to be merged because they take on similar values, we prefer for a feature to be very similar to one of its neighbors, rather than to have minimal distance to all of them. We therefore use a linear rather than quadratic penalty. Since we would like to make nearby features similar so long as it does not adversely impact the primary objective, we give this adjacency penalty a low weight. In our implementation, this weight was set to w_N = 10^{−5}. Normalizing the two objectives by the number of entries, we get the following sparse quadratic program:

\min_{R, R_\Phi, V} \;\; \frac{1}{|S||A|} \|R - T_{\Phi\to R} R_\Phi\|_2^2 + \frac{w_N}{K} \|N R_\Phi\|_1
\text{s.t.} \quad R_\Phi = T_{R\to\Phi} R
\quad V(s) = R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') \quad \forall (s, a) \in D
\quad V(s) \ge R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') + \epsilon \quad \forall s \in D, (s, a) \notin D
\quad V(s) \ge R(s, a) + \gamma \sum_{s'} \theta_{sas'} V(s') \quad \forall s \notin D

This program can be solved efficiently with any quadratic programming solver. It contains on the order of |S||A| variables and constraints, and the constraint matrix is sparse with O(|S||A| θ_a) non-zero entries, where θ_a is the average sparsity of θ_{sa·}, that is, the average number of states s′ that have a non-zero probability of being reached from s using action a. In our implementation, we use the cvx Matlab package [6] to solve this optimization efficiently.

4 Fitting Step

Once the reward function R^(i) for the current feature set Φ^(i−1) is computed, we formulate a new feature hypothesis Φ^(i) that is better able to represent this reward function. The objective of this step is to construct a set of features that gives greater resolution in regions where the old features are too coarse, and lower resolution in regions where the old features are unnecessarily fine. We obtain Φ^(i) by building a regression tree for R^(i) over the state space S, using the standard intra-cluster variance splitting criterion [3]. The tree is rooted at the node t_0, and each node of the tree is defined as t_j = {φ_j, Ω_j, t_{j−}, t_{j+}}. t_{j−} and t_{j+} are the left and right subtrees, Ω_j ⊆ S is the set of states belonging to node j (initialized as Ω_0 = S), and φ_j is the component feature that acts as the splitting test at node j. States s ∈ Ω_j for which φ_j(s) = 0 are assigned to the left subtree, and states for which φ_j(s) = 1 are assigned to the right subtree. In our implementation, all component features are binary, though the generalization to multivariate components and non-binary trees is straightforward.
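A hypothetical rendering of this tree structure (all names are ours), together with the leaf-membership test that turns each leaf cell into a feature:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Set

@dataclass
class Node:
    """Tree node t_j = {phi_j, Omega_j, t_j-, t_j+} (our sketch)."""
    states: Set[int]                      # Omega_j, states at this node
    component: Optional[Callable] = None  # phi_j, binary test (None at a leaf)
    left: Optional["Node"] = None         # subtree with phi_j(s) == 0
    right: Optional["Node"] = None        # subtree with phi_j(s) == 1
    reward: float = 0.0                   # average reward of a leaf cell

def leaf_of(root: Node, s) -> Node:
    """Follow the component tests from the root to the leaf containing s."""
    node = root
    while node.component is not None:
        node = node.right if node.component(s) == 1 else node.left
    return node

def leaves(root: Node):
    """Enumerate leaf cells; each defines one indicator feature."""
    if root.component is None:
        yield root
    else:
        yield from leaves(root.left)
        yield from leaves(root.right)
```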
The new set of features consists of indicators for each of the leaf clusters Ω_ℓ (where t_ℓ is a leaf node), and can be equivalently expressed as a conjunction of components: letting j_0, ..., j_n, ℓ be the sequence of nodes on the path from the root to t_ℓ, and defining r_0, ..., r_n so that r_k is 1 if t_{j_{k+1}} = t_{j_k +} and 0 otherwise, s ∈ Ω_ℓ if and only if φ_{j_k}(s) = r_k for all k ∈ {0, ..., n}.

As discussed in Section 2, we prefer the smallest tree that produces a rich enough feature set to represent a reward function consistent with the examples D. We therefore terminate the splitting procedure at node t_ℓ when we detect that further splitting of the node is unnecessary to maintain consistency with the example set. This is done by constructing a new reward function R̃^(i) for which R̃^(i)(s, a) = |Ω_ℓ|^{−1} Σ_{s′ ∈ Ω_ℓ} R^(i)(s′, a) if s ∈ Ω_ℓ, and R̃^(i)(s, a) = R^(i)(s, a) otherwise. The optimal policy under R̃^(i) is determined with value iteration and, if the policy is consistent with the examples D, t_ℓ becomes a leaf and R^(i) is updated to be equal to R̃^(i). Although value iteration ordinarily can take many iterations, since the changes we are considering often make small, local changes to the optimal policy compared to the current reward function R^(i), we can often converge in only a few iterations by starting with the value function V^(i) for the current reward R^(i). We therefore store this value function and update it along with R^(i).

In addition to this stopping criterion, we can also employ the loss factor δ(φ_{k1}, φ_{k2}) to encourage the next optimization step to assign similar values to nearby features, allowing them to be merged in subsequent iterations. Recall that δ(φ_{k1}, φ_{k2}) is a linear penalty on the difference between the average rewards of states in φ_{k1} and φ_{k2}, and can be used to drive the rewards in these features closer together so that they can be merged in a subsequent iteration. Features found deeper in the tree exhibit greater complexity, since they are formed by a conjunction of a larger number of components. These complex features are more likely to be the result of overfitting, and can be merged to form smaller trees. To encourage such mergers, we set δ(φ_{k1}, φ_{k2}) to be proportional to the depth of the deepest common ancestor of φ_{k1} and φ_{k2}. The loss factor is therefore set to δ(φ_{k1}, φ_{k2}) = D_a(k1, k2)/D_t, where D_a gives the depth of the deepest common ancestor of two nodes, and D_t is the total depth of the tree.
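The stopping test itself can be sketched as follows (ours; R is an |S| x |A| reward array, P[a] the |S| x |S| transition matrix of action a, and omega the candidate leaf cell):

```python
import numpy as np

def leaf_is_consistent(R, omega, P, gamma, D, tol=1e-8):
    """Average R over the candidate cell omega, re-solve the MDP by
    value iteration, and accept the leaf if the optimal policy still
    matches the demonstrations D (our sketch of the stopping test)."""
    idx = list(omega)
    R_tilde = R.copy()
    R_tilde[idx, :] = R[idx, :].mean(axis=0)   # average over the cell
    n_s, n_a = R_tilde.shape
    V = np.zeros(n_s)
    while True:                                # value iteration
        Q = R_tilde + gamma * np.stack([P[a] @ V for a in range(n_a)],
                                       axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return all(Q[s].argmax() == a for s, a in D)
```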
We experimented with a variety of other depth limiting schemes and found that this simple iterative deepening procedure produced the best results. 5 5.1 Experiments Gridworld In the first experiment, we compare FIRL with the MMP algorithm [9], the LPAL algorithm [10], and the algorithm of Abbeel & Ng [1] on a gridworld modeled after the one used by Abbeel & Ng. The purpose of this experiment is to determine how well FIRL performs on a standard IRL example, without knowledge of the relevant features. A gridworld consists of an N?N grid of states, with five actions possible in each state, corresponding to movement in each of the compass directions and standing in place. In the deterministic gridworld, each action deterministically moves the agent into the corresponding state. In the non-deterministic world, each action has a 30% chance of causing a transition to another random neighboring state. The world is partitioned into 64 equal-sized regions, and all the cells in a single region are assigned the same randomly selected reward. The expert?s policy is the optimal policy under this reward. The example set D is generated by randomly sampling states and following the expert?s policy for 100 steps. Since the prior algorithms do not perform feature construction, they were tested either with indicators for each of the 64 regions (referred to as ?perfect? features), or with indicators for each state (the ?primitive? features). FIRL was instead provided with 2N component features corresponding to splits on the x and y axes, so that ?x,i (sx,y ) = 1 if x ? i, and ?y,i (sx,y ) = 1 if y ? i. By composing such splits, it is possible to represent any rectangular partitioning of the state space. We first compare the running times of the algorithms (using perfect features for prior methods) on gridworlds of varying sizes, shown in Table 1. Performance was tested on an Intel Core i7 2.66 GHz computer. Each trial was repeated 10 times on random gridworlds, with average running times presented. For FIRL, running time is given for 15 iterations, and is also broken down into the average length of each optimization and fitting step. Although FIRL is often slower than methods that do not perform feature construction, the results suggest that it scales gracefully with the size of the problem. The optimization time scales almost linearly, while the tree construction scales worse than linearly but better than quadratically. The latter can likely be improved for large problems by using heuristics to minimize evaluations of the expensive stopping test. In the second experiment, shown in Figure 1, we evaluate accuracy on 64 ? 64 gridworlds with varying numbers of examples, again repeating each trial 10 times. We measured the percentage of states in which each algorithm failed to predict the expert?s optimal action (?percent misprediction?), as well as the Euclidean distance between the expectations of the perfect features under the learned policy and the expert?s policy (normalized by (1 ? ?) as suggested by Abbeel & Ng [1]). For the mixed policies produced by Abbeel & Ng, we computed the metrics for each policy and mixed them using the policy weights ? [1]. 
For the non-deterministic policies of LPAL, percent misprediction is 5 40% 30% 20% 10% 0% 2 4 8 16 32 64 128 256 512 examples non-deterministic 60% percent misprediction feature expectation dist percent misprediction 50% deterministic 0.2 0.15 0.1 0.05 0 2 4 8 50% 40% 30% 20% 10% 0% 16 32 64 128 256 512 2 4 examples 8 non-deterministic 0.2 feature expectation dist deterministic 60% A&N prim. A&N perf. LPAL perf. MMP prim. MMP perf. FIRL 0.1 0.05 0 16 32 64 128 256 512 LPAL prim. 0.15 2 4 examples 8 16 32 64 128 256 512 examples Figure 1: Accuracy comparison between FIRL, LPAL, MMP, and Abbeel & Ng, the latter provided with either perfect or primitive features. Shaded regions show standard error. Although FIRL was not provided the perfect features, it achieved similar accuracy to prior methods that were. the mean probability of taking an incorrect action in each state. Results for prior methods are shown with both the perfect and primitive features. FIRL again ran for 15 iterations, and generally achieved comparable accuracy to prior algorithms, even when they were provided with perfect features. 5.2 Transfer Between Environments While the gridworld experiments demonstrate that FIRL performs comparably to existing methods on this standard example, even without knowing the correct features, they do not evaluate the two key advantages of FIRL: its ability to construct features from primitive components, and its ability to generalize learned rewards to different environments. To evaluate reward transfer and see how the method performs with more realistic component features, we populated a world with objects. This environment also consists of an N?N grid of states, with the same actions as the gridworld. Objects are randomly placed with 5% probability in each state, and each object has 1 of C ?inner? and ?outer? colors, selected uniformly at random. The algorithm was provided with components of the form ?is the nearest X at most n units away,? where X is a wall or an object with a specific inner or outer color, giving a total of (2C + 1)N component features. The expert received a reward of ?2 for being within 3 units of an object with inner color 1, otherwise a reward of ?1 for being within 2 units of a wall, otherwise a reward of 1 for being within 1 unit of an object with inner color 2, and 0 otherwise. All other colors acted as distractors, allowing us to evaluate the robustness of feature construction to irrelevant components. For each trial, the learned reward tree was used to test accuracy on 10 more random environments, by specifying a reward for each state according to the regression tree. We will refer to these experiments as ?transfer.? Each trial was repeated 10 times. In Figure 2(a), we evaluate how FIRL performs with varying numbers of iterations on both the training and transfer environments, as well as on the gridworld from the previous section. The results indicate that FIRL converged to a stable hypothesis more quickly than in the gridworld, since the square regions in the gridworld required many more partitions than the objectrelative features. However, the required number of iterations was low on both environments. convergence analysis 10% 60% 50% 40% 30% 20% FIRL with objects FIRL transfer 10% FIRL gridworld 0% constraint violation 9% 70% 2 4 6 8 10 12 14 16 18 20 iterations percent violation percent misprediction 80% 8% 7% 6% 5% 4% 3% 2% 1% 0% 0.2 0.4 0.6 0.8 1 ? 
Figure 2(a): FIRL converged after a small number of iterations. Figure 2(b): Constraint violation was low in non-deterministic MDPs.

In Figure 2(b), we evaluate how often the non-convex constraints discussed in Section 3 are violated under our convex approximation. We measure the percent of examples that are violated with varying amounts of non-determinism, by varying the probability ρ with which an action moves the agent to the desired state. ρ = 1 is deterministic, and ρ = 0.2 gives a uniform distribution over neighboring states. The results suggest that the constraint is rarely violated under the convex relaxation, even in highly non-deterministic MDPs, and the number of violations decreases sharply as the MDP becomes more deterministic.

We compared FIRL's accuracy on the transfer task with Abbeel & Ng and MMP. LPAL was not used in the comparison because it does not return a reward function, and therefore cannot transfer its policy to new environments.
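Transfer itself is then a lookup (our sketch, reusing the hypothetical Node and leaf_of helpers from the Section 4 sketch, whose leaves store a scalar reward): because the tree tests only component features, which are defined in any environment, the learned tree assigns a reward to every state of a novel world.

```python
def transfer_reward(root, novel_states):
    """Map every state of a novel environment to its leaf's reward."""
    return {s: leaf_of(root, s).reward for s in novel_states}
```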
There are equivalent features for checking for cars in front or behind in the lanes to the left and to the right of the agent's, as well as a feature for each of the four speeds and each lane the agent can occupy. The rich feature set of this driving simulator enables interesting behaviors to be demonstrated. For this experiment, we implemented expert policies for two behaviors: a "lawful" driver and an "outlaw" driver. The lawful driver prefers to drive fast, but does not exceed speed 2 in the right lane, or speed 3 in the middle lane. The outlaw driver also prefers to drive fast, but slows down to speed 2 or below when within 2 car-lengths of a police vehicle (to avoid arrest). In Table 2, we compare the policies learned from traces of the two experts by FIRL, MMP, and Abbeel & Ng's algorithm. As before, prior methods were provided with all of the component features. All algorithms were trained on 30 traces on a stretch of highway 100 car-lengths long, and tested on 10 novel highways. As can be seen in the supplemental videos, the policy learned by FIRL closely matched that of the expert, maintaining a high speed whenever possible but not driving fast in the wrong lane or near police vehicles. The policies learned by Abbeel & Ng's algorithm and MMP drove at the minimum speed when trained on either the lawful or outlaw expert traces. Because prior methods only represented the reward as a linear combination of the provided features, they were unable to determine the logical connection between speed and the other features. The policies learned by these methods found the nearest "optimal" position with respect to their learned feature weights, accepting the cost of violating the speed expectation in exchange for best matching the expectation of all other (largely irrelevant) features. FIRL, on the other hand, correctly established the logical connection between speed and police vehicles or lanes, and drove fast when appropriate, as indicated by the average speed in Table 2. As a baseline, the table also shows the performance of a random policy generated by picking weights for the component features uniformly at random.

Table 2: Comparison of FIRL, MMP and Abbeel & Ng on the highway environment. The policies learned by FIRL closely match the expert's average speed, while those of other methods do not. The difference between the policies is particularly apparent in the supplemental videos, which can be found at http://graphics.stanford.edu/projects/firl/index.htm

                      "Lawful" policies                        "Outlaw" policies
            percent        feature exp.   average    percent        feature exp.   average
            misprediction  distance       speed      misprediction  distance       speed
  Expert     0.0%          0.000          2.410       0.0%          0.000          2.375
  FIRL      22.9%          0.025          2.314      24.2%          0.027          2.376
  MMP       27.0%          0.111          1.068      27.2%          0.096          1.056
  A&N       38.6%          0.202          1.054      39.3%          0.164          1.055
  Random    42.7%          0.220          1.053      41.4%          0.184          1.053

6 Discussion and Future Work

This paper presents an IRL algorithm that constructs reward features, represented as a regression tree, out of a large collection of component features. By combining relevant components into logical conjunctions, the FIRL algorithm is able to discover logical precedence relationships that would not otherwise be apparent. The learned regression tree concisely captures the structure of the reward function and acts as a portable "explanation" of the observed behavior in terms of the provided components, allowing the learned reward function to be transplanted onto different environments.
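This transfer mechanism is easy to prototype. The sketch below is not the authors' implementation; it uses an off-the-shelf CART regressor as a stand-in for the regression trees of [3], and `component_features` is a hypothetical helper returning the 0/1 component vector of a state. The tree is fit on component features against the reward values produced by the optimization step, then evaluated on the component features of an unseen environment.

# Minimal sketch (Python) of reward transfer via a regression tree.
# Assumptions: `env.states` enumerates states, and component_features(env, s)
# (hypothetical) returns the binary component-feature vector of state s.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_reward_tree(env, rewards, max_depth=8):
    # Fitting step: map component features to the optimized reward values.
    X = np.array([component_features(env, s) for s in env.states])
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X, rewards)
    return tree

def transfer_reward(tree, new_env):
    # Transfer: the tree specifies a reward for each state of a new environment.
    X = np.array([component_features(new_env, s) for s in new_env.states])
    return tree.predict(X)

Because the tree branches only on component tests, the learned conjunctions (e.g., "near a police vehicle AND speed above 2") carry over to any environment in which the same components can be computed.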
Feature construction for IRL may be a valuable tool for analyzing the motivations of an agent (such as a human or an animal) from observed behavior. Research indicates that animals learn optimal policies for a pattern of rewards [4], suggesting that it may be possible to learn such behavior with IRL. While it can be difficult to manually construct a complete list of relevant reward features for such an agent, it is comparatively easier to list all aspects of the environment that a human or animal is aware of. With FIRL, such a list can be used to form hypotheses about reward features, possibly leading to increased understanding of the agent's motivations. In fact, models that perform a variant of IRL have been shown to correspond well to goal inference in humans [2]. While FIRL achieves good performance on discrete MDPs, in its present form it is unable to handle continuous state spaces, since the optimization constraints require an enumeration of all states in S. Approximate linear programming has been used to solve MDPs with continuous state spaces [5], and a similar approach could be used to construct a tractable set of constraints for the optimization step, making it possible to perform feature construction on continuous or extremely large state spaces. Although we found that FIRL converged to a stable hypothesis quickly, it is difficult to provide an accurate convergence test. Theoretical analysis of convergence is complicated by the fact that regression trees provide few guarantees. The conventional training error metric is not a good measure of convergence, because the optimization constraints keep training error consistently low. Instead, we can use cross-validation, or heuristics such as leaf count and tree depth, to estimate convergence. In practice, we found this unnecessary, as FIRL consistently converged in very few iterations. Defining a practical convergence test and analyzing convergence is an interesting avenue for future work. FIRL may also benefit from future work on the fitting step. A more intelligent hypothesis proposal scheme, perhaps with a Bayesian approach, could more readily incorporate priors on potential features to penalize excessively deep trees or prevent improbable conjunctions of components. Furthermore, while regression trees provide a principled method for constructing logical conjunctions of component features, if the desired features are not readily expressible as conjunctions of simple components, other regression methods may be used in the fitting step. For example, the algorithm could be modified to perform feature adaptation by using the fitting step to adapt a set of continuously-parameterized features to best fit the reward function.

Acknowledgments. We thank Andrew Y. Ng, Emanuel Todorov, and Sameer Agarwal for helpful feedback and discussion. This work was supported in part by NSF grant CCF-0641402.

References

[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML '04: Proceedings of the 21st International Conference on Machine Learning. ACM, 2004.
[2] C. L. Baker, J. B. Tenenbaum, and R. R. Saxe. Goal inference as inverse planning. In Proceedings of the 29th Annual Conference of the Cognitive Science Society, 2007.
[3] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984.
[4] P. Dayan and B. W. Balleine. Reward, motivation, and reinforcement learning. Neuron, 36(2):285-298, 2002.
[5] D. P. de Farias and B. Van Roy.
The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850-865, 2003.
[6] M. Grant and S. Boyd. CVX: Matlab Software for Disciplined Convex Programming (web page and software), 2008. http://stanford.edu/~boyd/cvx.
[7] A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In ICML '00: Proceedings of the 17th International Conference on Machine Learning, pages 663-670. Morgan Kaufmann Publishers Inc., 2000.
[8] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2586-2591. Morgan Kaufmann Publishers Inc., 2007.
[9] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 729-736. ACM, 2006.
[10] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 1032-1039. ACM, 2008.
Active Instance Sampling via Matrix Partition

Yuhong Guo
Department of Computer & Information Sciences
Temple University
Philadelphia, PA 19122
yuhong@temple.edu

Abstract

Recently, batch-mode active learning has attracted a lot of attention. In this paper, we propose a novel batch-mode active learning approach that selects a batch of queries in each iteration by maximizing a natural mutual information criterion between the labeled and unlabeled instances. By employing a Gaussian process framework, this mutual information based instance selection problem can be formulated as a matrix partition problem. Although matrix partition is an NP-hard combinatorial optimization problem, we show that a good local solution can be obtained by exploiting an effective local optimization technique on a relaxed continuous optimization problem. The proposed active learning approach is independent of the employed classification model. Our empirical studies show this approach can achieve comparable or superior performance to discriminative batch-mode active learning methods.

1 Introduction

Active learning is well motivated in many supervised learning scenarios where unlabeled instances are abundant and easy to retrieve but labels are difficult, time-consuming, or expensive to obtain. For example, it is easy to gather large amounts of unlabeled documents or images from the Internet, whereas labeling them requires manual effort from experienced human annotators. Randomly selecting unlabeled instances for labeling is inefficient in many situations, since non-informative or redundant instances might be selected. Aiming to reduce labeling effort, active learning (i.e., selective sampling) methods have been adopted to control the labeling process in many areas of machine learning. Given a large pool of unlabeled instances, active learning provides a way to iteratively select the most informative unlabeled instances (the queries) from the pool to label. Many researchers have addressed the active learning problem in various ways [13]. Most have focused on selecting a single most informative unlabeled instance to query each time. The ultimate goal for most such approaches is to select instances that could lead to a classifier with low generalization error. Towards this, a few variants of a mutual information criterion have been employed in the literature to guide the active instance sampling process. The approaches in [4, 10] select the instance to maximize the increase of mutual information and the mutual information, respectively, between the selected set of instances and the remainder, based on Gaussian process models. The approach proposed in [5] seeks the instance whose optimistic label provides maximum mutual information about the labels of the remaining unlabeled instances. The mutual information measure used is discriminative, computed using their trained classifier at that point. This approach implicitly exploits the clustering information contained in the unlabeled data in an optimistic way. The single instance selection active learning methods require tedious retraining with each single instance being labeled. When the learning task is sufficiently complex, the retraining process between queries can become very slow. This may make highly interactive learning inefficient or impractical. Furthermore, if a parallel labeling system is available, e.g., multiple annotators working on different labeling workstations at the same time on a network, a single instance selection system can make wasteful use of the resource. Thus, a batch-mode active learning strategy that selects multiple instances each time is more appropriate under these circumstances. The challenge in batch-mode active learning is how to properly assemble the optimal query batch. Simply using a single instance selection strategy to select a batch of queries in each iteration does not work well, since it fails to take the information overlap between the multiple instances into account. Principles for batch-mode active learning need to be developed to address multi-instance selection specifically. Several sophisticated batch-mode active learning methods have been proposed for classification. Most of these approaches use greedy heuristics to ensure the overall informativeness of the batch by taking both the individual informativeness and the diversity of the selected instances into account. Schohn and Cohn [12] select instances according to their proximity to the dividing hyperplane for a linear SVM. Brinker [2] considers an approach for SVMs that explicitly takes the diversity of the selected instances into account, in addition to individual informativeness. Xu et al. [14] propose a representative sampling approach for SVM active learning, which also incorporates a diversity measure. Specifically, they query cluster centroids for instances that lie close to the decision boundary. Hoi et al. [7, 8] extend the Fisher information framework to the batch-mode setting for binary logistic regression. Hoi et al. [9] propose a novel batch-mode active learning scheme on SVMs that exploits semi-supervised kernel learning. In particular, a kernel function is first learned from a mixture of labeled and unlabeled examples, and then is used to effectively identify the informative and diverse instances via a min-max framework. Instead of using heuristic measures, Guo and Schuurmans [6] treat batch construction for logistic regression as a discriminative optimization problem, and attempt to construct the most informative batch directly. Overall, these batch-mode active learning approaches all make batch selection decisions directly based on the classifiers employed. In this paper, we propose a novel batch-mode active learning approach that makes query selection decisions independent of the classification model employed. The idea is to select a batch of queries in each iteration by maximizing a general mutual information measure between the labeled instances and the unlabeled instances. By employing a Gaussian process framework, this mutual information maximization problem can be further formulated as a matrix partition problem. Although the matrix partition problem is an NP-hard combinatorial optimization, it can first be relaxed into a continuous optimization problem, and then a good local solution can be obtained by an effective local optimization. The local optimization method we use combines a local linearization of the objective function, based on its first-order Taylor series expansion, with a straightforward backtracking line search. Unlike most active learning methods studied in the literature, our query selection method does not require knowledge of the employed classifier. Our empirical studies show that the proposed batch-mode active learning approach can achieve superior or comparable performance to discriminative batch-mode active learning methods that have been optimized on specific classifiers. The remainder of the paper is organized as follows. Section 2 provides preliminaries on Gaussian processes.
Section 3 introduces the proposed matrix partition approach for batch-mode active learning. Empirical studies are presented in Section 4, and Section 5 concludes this work.

2 Gaussian Processes

A Gaussian process is a generalization of the Gaussian probability distribution. Although Gaussian processes have a long history in statistics, their potential has only become widely appreciated in the machine learning community during the past decade [11]. In this section, we provide an overview of Gaussian processes and some of their important properties, which we will exploit later to construct our active learning approach.

2.1 Multivariate Gaussian Distribution

The Gaussian, also known as the normal distribution, is a widely used model for the distribution of continuous variables. In the case of multiple random variables, the joint multivariate Gaussian distribution for a $d \times 1$ vector $x$ is given in the form
$$P(x) = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right)$$
where $\mu$ is a $d$-dimensional mean vector, $\Sigma$ is a $d \times d$ covariance matrix, and $|\Sigma|$ denotes the determinant of $\Sigma$. When $d = 1$, we obtain the standard one-variable Gaussian distribution.

2.2 Gaussian Processes

A Gaussian process is a generalization of a multivariate Gaussian distribution over a finite vector space to a function space of infinite dimension. Given a set of instances $X = [x_1^\top; x_2^\top; \cdots; x_t^\top]$, a data modeling function $f(\cdot)$ can be viewed as a single sample from a Gaussian distribution with a mean function $\mu(\cdot)$ and a covariance function $C(\cdot,\cdot)$. In particular, $\mu(x_i)$ denotes the mean of the function variable $f(x_i)$ at point $x_i$, and $C(x_i, x_j)$ expresses the expected covariance between the function $f$ at points $x_i$ and $x_j$. A Gaussian process is defined as a Gaussian distribution on a space of functions $f$, which can be written in the form
$$P(f(x)) = \frac{1}{Z} \exp\left(-\frac{1}{2}(f(x)-\mu(x))^\top \Sigma^{-1} (f(x)-\mu(x))\right)$$
where $\mu(x)$ is the mean function, $\Sigma$ is defined using the covariance function $C$, and $Z$ denotes the normalization factor. One typical choice for the covariance function $C$ is a symmetric positive-definite kernel function $K$, e.g., a Gaussian kernel
$$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{\sigma^2}\right) \qquad (1)$$
One important property of Gaussian processes is that for every finite set (or subset) of instances $X_Q$ with indices $Q$, the joint distribution over the corresponding random function variables $f_Q = f(X_Q)$ is a multivariate Gaussian distribution with a mean vector $\mu_Q = \mu(X_Q)$ and a covariance matrix $\Sigma_{QQ}$, where each entry is defined using the covariance kernel function $K(x_i, x_j)$:
$$P(f_Q) = \frac{1}{Z} \exp\left(-\frac{1}{2}(f_Q-\mu_Q)^\top \Sigma_{QQ}^{-1} (f_Q-\mu_Q)\right) \qquad (2)$$
Here $Z = (2\pi)^{q/2}|\Sigma_{QQ}|^{1/2}$, and $q$ is the size of the set $Q$. We can assume the mean function $\mu(\cdot) = 0$; nevertheless, it is irrelevant in this paper.

3 Batch-mode Active Learning via Matrix Partition

Given a small set of labeled instances $\{(x_i, y_i)\}_{i \in L}$ and a large set of unlabeled instances $\{x_j\}_{j \in U}$, our task is to iteratively select the most informative set of $b$ instances from $U$ and add them into the labeled set $L$ after querying their labels from a labeling system. In this section, we propose to conduct instance selective sampling using a maximum mutual information strategy, which can then be formulated into a matrix partition problem.
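To make these preliminaries concrete, here is a small numerical sketch (not from the paper) of the two quantities the selection criterion below is built from: the Gaussian-kernel covariance of Eq. (1) and the closed-form entropy of a multivariate Gaussian. The bandwidth sigma is a free parameter chosen purely for illustration.

# Sketch (Python): kernel covariance (Eq. 1) and Gaussian entropy.
# Assumes rows of X are instances; sigma is an illustrative choice.
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    # Sigma_QQ with entries K(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma^2).
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared dists
    return np.exp(-np.maximum(d2, 0.0) / sigma**2)

def gaussian_entropy(cov):
    # H(f_Q) = 0.5 * ln((2*pi*e)^m |cov|), evaluated stably via slogdet.
    m = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (m * np.log(2.0 * np.pi * np.e) + logdet)

The entropy form used here is exactly the closed-form expression that the mutual information criterion of the next subsection reduces to.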
3.1 Maximum Mutual Information Instance Selection

Since the ultimate goal of active learning is to achieve a classifier with good generalization performance on unseen test data, it makes sense to select instances that can produce a labeled set that is most informative about the unseen test instances. Obviously, it is not possible to access the unseen test data. Nevertheless, in the active learning setting we have a large number of unlabeled instances available that come from the same distribution as the future test instances. Thus we can instead select instances that lead to a labeled set which is most informative about the large set of unlabeled instances. We propose to use a mutual information criterion to measure the informativeness of the labeled set $L$ over the unlabeled set $U$:
$$I(X_L, X_U) = H(X_L) + H(X_U) - H(X_L, X_U) \qquad (3)$$
where $X_L$ and $X_U$ denote the labeled and unlabeled sets of instances respectively, and $H(\cdot)$ denotes the entropy term. Both the mutual information measure and the entropy measure are defined on probability distributions [3]. We thus employ a Gaussian process framework (introduced in the previous section) to model the joint probability distribution over all the instances. We first associate each instance $x_i$ with a random variable $f_i$. Then the joint distribution over a finite number of instances $X_Q$ can be represented using the joint multivariate Gaussian distribution over variables $f_Q$, which is given in (2). Thus the entropy term $H(X_Q) = H(f_Q)$ can be computed using a closed-form solution
$$H(f_Q) = \frac{1}{2}\ln\left((2\pi e)^m |\Sigma_{QQ}|\right) \qquad (4)$$
where $m$ is the number of variables, i.e., the size of $Q$, and $\Sigma_{QQ}$ is the covariance matrix computed over $X_Q$ using the kernel function $K$ given in (1). Within this Gaussian process framework, the mutual information criterion in (3) can be rewritten as
$$I(X_L, X_U) = H(f_L) + H(f_U) - H(f_L, f_U) \qquad (5)$$
$$= \frac{1}{2}\ln\left((2\pi e)^l |\Sigma_{LL}|\right) + \frac{1}{2}\ln\left((2\pi e)^u |\Sigma_{UU}|\right) - \frac{1}{2}\ln\left((2\pi e)^t |\Sigma_{VV}|\right)$$
where $V$ is the union of $L$ and $U$, and $l, u, t$ denote the sizes of $L, U, V$ respectively, such that $l + u = t$. Note that for a given data set, the overall number of instances does not change during the active learning process: we simply move $b$ instances from the unlabeled set $U$ into the labeled set $L$ in each iteration. Thus the set $V$ and the entropy term $H(f_L, f_U)$ are irrelevant to the instance selection. Based on this observation, our maximum mutual information instance selection strategy can be formulated as
$$Q^* = \arg\max_{|Q|=b,\; Q \subseteq U} I(X_{L \cup Q}, X_{U \setminus Q}) = \arg\max_{|Q|=b,\; Q \subseteq U} \;\ln|\Sigma_{L'L'}| + \ln|\Sigma_{U'U'}| \qquad (6)$$
where $L' = L \cup Q$ and $U' = U \setminus Q$. This also shows that the mutual information criterion depends only on the covariance matrices computed using the kernel functions over the instances. Our maximum mutual information strategy attempts to select the batch of $b$ instances from the unlabeled set $U$ to label that maximizes the log-determinants of the covariance matrices over the produced sets $L'$ and $U'$.

3.2 Matrix Partition

Let $\Sigma$ be the covariance matrix over all the instances indexed by $V = L \cup U = L' \cup U'$. Then the covariance matrices $\Sigma_{LL}$, $\Sigma_{UU}$, $\Sigma_{L'L'}$ and $\Sigma_{U'U'}$ are all submatrices of $\Sigma$. Without losing any generality, we assume the instances are arranged in the order $[U, L]$, such that
$$\Sigma = \begin{bmatrix} \Sigma_{UU} & \Sigma_{UL} \\ \Sigma_{LU} & \Sigma_{LL} \end{bmatrix} \qquad (7)$$
The instance selection problem formulated in (6) selects a subset of $b$ instances indexed by $Q$ from $U$ and moves them into the labeled set $L$. This problem is actually equivalent to partitioning matrix $\Sigma$ into submatrices $\Sigma_{L'L'}$, $\Sigma_{U'U'}$, $\Sigma_{L'U'}$ and $\Sigma_{U'L'}$ by reordering the instances in $U$. Since $L$ is fixed, the actual matrix partition is conducted on the covariance matrix $\Sigma_{UU}$. Now we define a permutation matrix $M \in \{0,1\}^{u \times u}$ such that $M\mathbf{1} = \mathbf{1}$ and $M^\top\mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ denotes a vector of all 1 entries. We let $M_{-b}$ denote the first $u-b$ rows of $M$, and $M_b$ denote the last $b$ rows of $M$, such that
$$M_{-b}\,\Sigma_{UU}\,M_{-b}^\top = \Sigma_{U'U'}, \qquad M_b\,\Sigma_{UU}\,M_b^\top = \Sigma_{QQ} \qquad (8)$$
Obviously $M_b$ selects $b$ instances from $U$ to form $Q$. Let
$$T = \begin{bmatrix} M_{-b} & O_{(u-b)\times l} \end{bmatrix}, \qquad B = \begin{bmatrix} M_b & O_{b\times l} \\ O_{l\times u} & I_l \end{bmatrix} \qquad (9)$$
where $O_{m\times n}$ denotes an $m \times n$ matrix with all 0 entries, and $I_l$ denotes an $l \times l$ identity matrix. According to (8) we then have
$$\Sigma_{U'U'} = T\Sigma T^\top, \qquad \Sigma_{L'L'} = B\Sigma B^\top \qquad (10)$$
Finally, the maximum mutual information problem given in (6) can be equivalently formulated into the following matrix partition problem
$$\max_M \;\ln|B\Sigma B^\top| + \ln|T\Sigma T^\top| \quad \text{s.t.} \quad M \in \{0,1\}^{u\times u},\; M\mathbf{1} = \mathbf{1},\; M^\top\mathbf{1} = \mathbf{1} \qquad (11)$$
After solving this problem to obtain an optimal $M^*$, the instance selection can be determined from the last $b$ rows of $M^*$, i.e., $M_b^*$. However, the optimization problem (11) is an NP-hard combinatorial optimization problem over an integer matrix $M$. To facilitate a convenient optimization procedure, we relax the integer optimization problem (11) into the following upper bound optimization problem
$$\max_M \;\ln|B\Sigma B^\top| + \ln|T\Sigma T^\top| \qquad (12)$$
$$\text{s.t.} \quad 0 \le M \le 1,\; M\mathbf{1} = \mathbf{1},\; M^\top\mathbf{1} = \mathbf{1} \qquad (13)$$
Note that the determinant is a log-concave function on positive definite matrices [1]; thus $\ln|X|$ is concave in $X$. However, the quadratic matrix function $X = B\Sigma B^\top$ is matrix convex given that the matrix $\Sigma$ is positive definite. Thus the composite function $\ln|B\Sigma B^\top|$ is neither convex nor concave, but it is differentiable. In general, problems of this type are difficult global optimization problems. We instead develop an efficient local optimization technique to solve for a reasonable local solution.

3.3 First-order Local Optimization

The target optimization (12) is an optimization problem over a $u \times u$ matrix $M$, subject to the linear inequality and equality constraints (13). Here $u$ is the number of unlabeled instances, and we typically assume it is a large number, so a second-order optimization approach would be space demanding. We develop a first-order local maximization algorithm that combines a gradient direction finding method with a straightforward backtracking line search technique. This local optimization algorithm produced promising results in our experiments. The algorithm is an iterative procedure, starting from an initial matrix $M^{(0)}$. Let $M^{(k)}$ denote the optimization variable values returned from the $k$th iteration. At the $(k+1)$th iteration, we approximate the objective function in (12) using its first-order Taylor series expansion at point $M^{(k)}$:
$$g(M) = \ln|B\Sigma B^\top| + \ln|T\Sigma T^\top| \approx \ln|B^{(k)}\Sigma B^{(k)\top}| + \ln|T^{(k)}\Sigma T^{(k)\top}| + \mathrm{Tr}\!\left(G(M^{(k)})^\top (M - M^{(k)})\right) \qquad (14)$$
where $B^{(k)}$ and $T^{(k)}$ denote the corresponding $B$ and $T$ matrices with their $M$ submatrices fixed to the values given by $M^{(k)}$, $\mathrm{Tr}$ denotes the trace operator, and $G(M^{(k)})$ denotes the gradient matrix value at point $M^{(k)}$. The gradient of the objective function $g(M)$ can be calculated using matrix calculus, which gives the following results:
$$G(M_{-b}) = \frac{dg(M)}{dM_{-b}} = 2\left[(T\Sigma T^\top)^{-1} T\Sigma\right]_{1:(u-b),\,1:u} \qquad (15)$$
$$G(M_b) = \frac{dg(M)}{dM_b} = 2\left[(B\Sigma B^\top)^{-1} B\Sigma\right]_{1:b,\,1:u} \qquad (16)$$
$$G(M) = \left[G(M_{-b})^\top,\; G(M_b)^\top\right]^\top \qquad (17)$$
Note that here we use notation in the Matlab format, where $[X]_{i:j,\,m:n}$ denotes the $(j-i+1)\times(n-m+1)$ submatrix of $X$ formed by the entries between the $i$th to the $j$th rows and the $m$th to the $n$th columns. Given the gradient at point $M^{(k)}$, we maximize the local linearization (14) to seek a gradient direction subject to the constraints. This leads to a convex linear optimization
$$\widetilde{M} = \arg\max_M \;\mathrm{Tr}\!\left(G(M^{(k)})^\top M\right) \quad \text{s.t.} \quad 0 \le M \le 1,\; M\mathbf{1} = \mathbf{1},\; M^\top\mathbf{1} = \mathbf{1} \qquad (18)$$
The gradient direction for the $(k+1)$th iteration can then be determined as
$$D = \widetilde{M} - M^{(k)} \qquad (19)$$
We then employ a backtracking line search to seek an optimal value $M^{(k+1)}$ that improves the original objective function $g(M)$, with $g(M^{(k+1)}) > g(M^{(k)})$. The line search procedure, linesearch$(D, M^{(k)})$, seeks an optimal step size $0 \le s < 1$ to update $M^{(k)}$ in the ascent direction $D$ given in (19), i.e., $M^{(k+1)} = M^{(k)} + sD$, guaranteeing that the returned $M^{(k+1)}$ satisfies the linear constraints in (13) and leads to an objective value no worse than before. The overall algorithm for optimizing the matrix partition problem (12) is given in Algorithm 1.

Algorithm 1 Matrix Partition
Input: $l$: the number of labeled instances; $u$: the number of unlabeled instances; $\Sigma$: covariance matrix given in the form of (7); $b$: batch size; $M^{(0)}$: initial matrix; tolerance $\epsilon < 1\mathrm{e}{-}8$.
Output: $M^*$
  Initialize $k = 0$, NoChange = false.
  repeat
    Set $T$ and $B$ according to equations (9) using the current $M^{(k)}$.
    Compute the gradient $G(M^{(k)})$ at point $M^{(k)}$ according to equations (15), (16) and (17).
    Solve the local linear optimization (18) for the given gradient to get $\widetilde{M}$.
    Compute the gradient ascent direction $D$ using equation (19).
    Compute $M^{(k+1)} = \mathrm{linesearch}(D, M^{(k)})$.
    if $\|M^{(k+1)} - M^{(k)}\|_2 < \epsilon$ then NoChange = true. end if
    $k = k + 1$.
  until NoChange is true or the maximum iteration number is reached.
  $M^* = M^{(k)}$.

Algorithm 2 Heuristic Greedy Rounding Procedure
Input: $b$, $M \in (0,1)^{b\times u}$ for $b < u$.
Output: $\widehat{M}$, $Q$.
  Initialize $\widehat{M}$ as a $b \times u$ matrix with all 0 entries. Let $Q = \emptyset$.
  for $k = 1$ to $b$ do
    Identify the largest value $v = \max(M(:))$.
    Identify the indices $(i, j)$ of $v$ in $M$.
    Set $Q = Q \cup \{j\}$, $\widehat{M}(i, j) = 1$, $M(i, :) = -\mathrm{Inf}$, $M(:, j) = -\mathrm{Inf}$.
  end for

In our implementation, the constrained linear optimization (18) can be efficiently solved using the optimization software package CPLEX. When the number of unlabeled instances $u$ is large, computing the log-determinant of the $(u-b)\times(u-b)$ matrix $T\Sigma T^\top$ directly is likely to run into overflow or underflow, so we compute it in an alternative, efficient way. The key idea is the mathematical fact that the determinant of a triangular matrix equals the product of its diagonal elements; hence its log-determinant equals the sum of the logarithms of those diagonal values. By keeping all computations in log-scale, the underflow/overflow caused by taking the product of many numbers can be effectively circumvented. For positive definite matrices, such as the matrices we have, one can use a Cholesky factorization to first produce a triangular matrix and then compute the log-determinant of the original matrix from the logarithms of the diagonal values of the triangular factor. The computations of log-determinants and matrix inverses in our algorithm are all conducted on matrices assumed to be positive definite.
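The two computational devices just described, the Cholesky-based log-determinant and the greedy rounding of Algorithm 2, are compact enough to sketch directly. The code below is a transcription for illustration, not the authors' implementation.

# Sketch (Python). For positive definite A with Cholesky factor L (A = L L^T),
# |A| = (prod diag(L))^2, so logdet(A) = 2 * sum(log(diag(L))) -- all in
# log-scale, avoiding overflow/underflow.
import numpy as np

def logdet_pd(A):
    L = np.linalg.cholesky(A)
    return 2.0 * np.sum(np.log(np.diag(L)))

def greedy_round(Mb):
    # Algorithm 2: round the continuous b x u block M_b to 0/1 picks by
    # repeatedly committing the largest remaining entry (i, j) and then
    # blocking its row and column; returns the rounded matrix and indices Q.
    M = Mb.astype(float).copy()
    b, u = M.shape
    M_hat = np.zeros((b, u))
    Q = []
    for _ in range(b):
        i, j = np.unravel_index(np.argmax(M), M.shape)
        M_hat[i, j] = 1.0
        Q.append(j)
        M[i, :] = -np.inf
        M[:, j] = -np.inf
    return M_hat, Q

On positive definite inputs, logdet_pd agrees with np.linalg.slogdet; the explicit Cholesky form simply makes the log-scale argument above visible.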
However, in order to increase the robustness of the algorithm and avoid numerical problems, we can add an additional small term $\delta I$ to the matrices to guarantee the positive definite property; here $\delta$ is a very small value and $I$ is an identity matrix. By solving the matrix partition problem in (12) using Algorithm 1, an optimal matrix $M^*$ is returned. However, this $M^*$ contains continuous values. In order to determine which set of $b$ instances to select, we need to round $M^*$ to a $\{0,1\}$-valued $\widehat{M}^*$, while maintaining the permutation constraints $\widehat{M}^*\mathbf{1} = \mathbf{1}$ and $\widehat{M}^{*\top}\mathbf{1} = \mathbf{1}$. We use a simple heuristic greedy procedure to conduct the rounding. In this procedure, we focus on rounding the last $b$ rows, $M_b^*$, since they are the ones used to pick the $b$ instances for labeling. The procedure is described in Algorithm 2, which returns the indices of the selected $b$ instances as well.

4 Experiments

To investigate the empirical performance of the proposed batch-mode active learning algorithm, we conducted two sets of experiments, on a few UCI datasets and on the 20 newsgroups dataset. Note that the proposed active learning method is in general independent of the specific classification model employed. For the experiments in this section, we used logistic regression as the classification model to evaluate the informativeness of the selected labeled instances. We compared the proposed approach, denoted Matrix, with three discriminative batch-mode active learning methods proposed in the literature: svmD, an approach that incorporates diversity in active learning with SVMs [2]; Fisher, an approach that uses a Fisher information matrix based on logistic regression classifiers for instance selection [8]; and Discriminative, a discriminative optimization approach based on logistic regression classifiers [6]. We have also compared our approach to one transductive experimental design method, which is formulated from regression problems and whose instance selection process is likewise independent of the evaluation classification model [15]. We used the sequential design code downloaded from the authors' webpage and denote this method as Design. First, we conducted experiments on seven UCI datasets. We consider a hard case of active learning, where we start active learning from only a few labeled instances. In each experiment, we start with two randomly selected labeled instances, one in each class. We then randomly select 2/3 of the remaining instances as the unlabeled set, using all the other instances for testing. All the algorithms start with the same initial labeled set, unlabeled set and testing set. For a fixed batch size b, each algorithm repeatedly selects b instances to label each time and evaluates the produced classifier on the testing data after each new labeling, with a maximum of 110 instances to select in total. The experiments were repeated 20 times. In Table 1, we report the experimental results with b = 10, comparing the proposed Matrix algorithm with each of the three batch-mode alternatives. With b = 10, there are in total 11 evaluation points, with 20 results at each of them. We therefore run a 2-sided paired t-test at each evaluation point to compare the performance of each pair of algorithms. The "win%" columns denote the percentage of evaluation points where the Matrix algorithm outperforms the specified algorithm using a 2-sided paired t-test at the level of p < 0.05; the "lose%" columns denote the percentage of evaluation points where the specified algorithm outperforms the Matrix algorithm. The "overall" columns show the comparison results using a single 2-sided paired t-test on all 220 results. These results show that the proposed active learning method, Matrix, outperformed svmD, Fisher and Design on most data sets, except for an overall loss to svmD on pima, a tie with Fisher and Design on hepatitis, and a tie with Design on flare. Matrix is mostly tied with Discriminative on all data sets, with a slight pointwise win on crx and a slight overall loss on german. Although Matrix and Discriminative demonstrated similar performance, the proposed Matrix is more efficient in running time on relatively big data sets. Running times over the 20 repeats are reported in Table 2.

Table 1: Comparison of the active learning algorithms on UCI data with batch size = 10. These results are based on a 2-sided paired t-test at the level of p < 0.05.

  Data set    Matrix vs svmD         Matrix vs Fisher       Matrix vs Discriminative  Matrix vs Design
              win%   lose%  overall  win%   lose%  overall  win%   lose%  overall     win%   lose%  overall
  cleve       63.6   0      win      45.5   0      win      0      0      tie         90.9   0      win
  crx         27.3   0      win      9.1    0      win      9.1    0      tie         90.9   0      win
  flare       54.5   0      win      100.0  0      win      0      0      tie         36.4   9.1    tie
  german      81.8   0      win      9.1    0      win      0      0      lose        72.7   0      win
  heart       63.6   0      win      36.4   0      win      0      0      tie         100.0  0      win
  hepatitis   100.0  0      win      33.3   0      tie      0      0      tie         0      0      tie
  pima        0      0      lose     100.0  0      win      0      0      tie         81.8   0      win

Table 2: Average running time (in minutes).

  Method          cleve  crx    flare   german  heart  hepatitis  pima
  Matrix          8.37   6.14   9.53    22.08   5.68   0.12       60.11
  Discriminative  3.33   61.44  220.12  285.65  2.40   0.08       68.27

Next we conducted experiments on the 20 newsgroups dataset for document categorization. We built three binary classification tasks: (1) Autos: rec.autos (987 documents) vs. rec.motorcycles (993 documents); (2) Hardware: comp.sys.ibm.pc.hardware (979 documents) vs. comp.sys.mac.hardware (958 documents); (3) Sport: rec.sport.baseball (991 documents) vs. rec.sport.hockey (997 documents). Each document is first minimally processed into a "tf.idf" vector. We then select the top 400 features to use according to their total "tf.idf" frequencies in all the documents for the considered task. In each experiment, we start with four randomly selected labeled instances, two in each class. We then randomly select 1000 instances (500 from each class) from the remaining ones as the unlabeled set, using all the other instances for testing. All the algorithms start with the same initial labeled set, unlabeled set and testing set. For a fixed batch size b, each algorithm repeatedly selects b instances to label each time, with a maximum of 300 instances to select in total. In this section, we report the experimental results with b = 20, averaged over 20 repetitions. There are 300/20 = 15 evaluation points in this case. Note that the unlabeled sets used for this set of experiments are much larger than the ones used for the experiments on the UCI datasets. This substantially increases the search space of instance selection. One consequence in our experiments is that the Discriminative algorithm becomes very slow. Thus we were not able to produce comparison results for this algorithm. The proposed Matrix method was affected as well. However, we coped with this problem using a subsampling-assisted method, where we first select a subset of 400 instances from the unlabeled set and then restrict our instance selection to this subset. This is equivalent to solving the matrix partition optimization in (12) with additional constraints on $M_b$, such that the columns of $M_b$ corresponding to instances outside of this subset of 400 instances are all set to 0. For the experiments, we chose the 400 instances as the ones with the top entropy terms under the current classification model. The same subsampling was used for the method Design as well.

Table 3: Comparison of the active learning algorithms on Newsgroup data with batch size = 20. These results are based on a 2-sided paired t-test at the level of p < 0.05.

  Data set   Matrix vs svmD         Matrix vs Fisher       Matrix vs Random       Matrix vs Design
             win%   lose%  overall  win%   lose%  overall  win%   lose%  overall  win%   lose%  overall
  Autos      86.7   0      win      20.0   6.6    tie      73.3   6.6    win      80.0   6.7    win
  Hardware   100.0  0      win      0      0      tie      13.3   0      win      86.7   0      win
  Sport      86.7   6.6    win      20.0   13.3   tie      46.7   0      win      80.0   6.7    win

Table 3 shows the comparison results on the three document categorization tasks, comparing Matrix to svmD, Fisher, Design and a baseline random selection, Random. These results show that the proposed Matrix outperformed svmD, Design and Random. It tied with Fisher on the overall measure, but had a slight win on the pointwise measure. These empirical results suggest that selecting unlabeled instances independent of the classification model using the proposed matrix partition method can achieve reasonable performance, which is better than a transductive experimental design method and comparable to the discriminative batch-mode active learning approaches. Moreover, our approach can offer certain conveniences in circumstances where one does not know the classification model that will be employed.

5 Conclusions

In this paper, we propose a novel batch-mode active learning approach that makes query selection decisions independent of the classification model employed. The proposed approach is based on a general maximum mutual information principle. It is formulated as a matrix partition optimization problem under a Gaussian process framework. To tackle the formulated combinatorial optimization problem, we developed an effective local optimization technique. Our empirical studies show the proposed flexible batch-mode active learning approach can achieve comparable or superior performance to discriminative batch-mode active learning methods that have been optimized on specific classifiers. A future extension for this work is to consider batch-mode active learning with structured data by exploiting different kernel functions.

References

[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] K. Brinker. Incorporating diversity in active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, 2003.
[3] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[4] C. Guestrin, A. Krause, and A. Singh. Near-optimal sensor placements in Gaussian processes. In Proceedings of the International Conference on Machine Learning, 2005.
[5] Y. Guo and R. Greiner. Optimistic active learning using mutual information. In Proceedings of the International Joint Conference on Artificial Intelligence, 2007.
[6] Y. Guo and D. Schuurmans. Discriminative batch mode active learning. In Proceedings of Neural Information Processing Systems, 2007.
[7] S. Hoi, R. Jin, and M. Lyu. Large-scale text categorization by batch mode active learning. In Proceedings of the International World Wide Web Conference, 2006.
[8] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the International Conference on Machine Learning, 2006.
[9] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Semi-supervised SVM batch mode active learning for image retrieval. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
[10] A. Krause, C. Guestrin, A. Gupta, and J. Kleinberg. Near-optimal sensor placements: Maximizing information while minimizing communication cost. In International Symposium on Information Processing in Sensor Networks, 2006.
[11] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] G. Schohn and D. Cohn. Less is more: Active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, 2000.
[13] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
[14] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang. Representative sampling for text classification using support vector machines. In European Conference on Information Retrieval, 2003.
[15] K. Yu and J. Bi. Active learning via transductive experimental design. In Proceedings of the International Conference on Machine Learning, 2006.
Time Trials on Second-Order and Variable-Learning-Rate Algorithms Richard Rohwer Centre for Speech Technology Research Edinburgh University 80, South Bridge Edinburgh EH 1 1HN, SCOTLAND Abstract The performance of seven minimization algorithms are compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian. 1 Introduction There are several minimization algorithms in use which in the the ith coordinate Xi in the direction S~+l Vf = ;:.1 , = r~s~ " + h~V~ " nth iteration vary (1) is the ith component of the gradient of the error measure E ? z .. at zn, sO = V O, and rn and h n are chosen differently in different algorithms. Algorithms also use various methods for choosing the step size .,.,n to be taken along direction sn. In this study, 7 algorithms were compared on a suite of 5 neural network problems. These algorithms are defined in table 1. where 1.1 The algorithms The algorithms investigated are Silva and Almeida's variable-step-size algorithm (Silva, 1990) which closely resembles Toolenaere's "SuperSAB" algorithm (Toole- 977 978 Rohwer r--r---r---------r---r-----r-----r--------.--------,---------'r----, ... ~~I ... ~~I ... ~~~I - - - ~":~~o -000 0-00-00-00 ~.~ II II II II II II II II II II II II II II II t ~ :; i ~ ;:! II II II II II II II II II ~ ~ ~-%.: 0& ...I ...I ~ .-.. ~ .-.. ~ .-.. I I ...I 000 1\ II V ......... I I I t;-t;-t;" ~ - - ~ ~i>-~ ' -"' + + ... ... 1\ VI ~ I ' -"' '-' Es ~ -- ... -~- ~ .-.. '---" ~ ~ II -+' I II i> ~- + I ~--~------4--+---_+----~---~------+_-----~1 ~~ ::-. + ... , V 1\1 ...... oQ . to; ' " ~Q + + -::-' , ~~ ' G I>~ -;- ... .... - ...+ - 5::...+ ~~- - c~ ~--~--------r_--~----~----~------_r--------r_--------~ II o o o o o naere, 1990), conjugate gradient (Press, 1988), and 5 variants of an algorithm advocated by LeCun (LeCun, 1989), which employs an analytic calculation of the diagonal terms of the matrix of second derivatives. (Algorithms involving an approximation of the full Hessian, the inverse of the matrix of second derivatives, were studied by Watrous (Watrous, 1987).) In 4 of these methods the gradient is divided component-wise by a decaying average of either the second derivatives or their absolute values. Dividing by the absolute values assures that s . V < 0, and reflects the philosophy that directions with high curvature, be it positive or negative, are Time Trials on Second-Order and Variable-Learning-Rate Algorithms not good ones to follow because the quadratic approximation is likely to break down at short distances. In the remaining method, sketched in (Rohwer, 1990a,b), the gradient is divided componentwise by the maximum of the absolute values of an analytic and numerical calculation of the second derivitives. Again the philosopy is that curvature is to be avoided. The numerical calculation may detect evidence of nearby high curvature at a point where the analytic calculation finds low curvature. Some algorithms conventionally use a multi-step I-dimensional "linesearch" to determine how far to proceed in direction 8, whereas others take a single step according to some formula. A linesearch guarantees descent (more precisely, non-ascent), which is beneficial if local minima pose no threat. Table?? shows the step-size methods used in this study; the decisions are rather arbitrary. 
The theoretical basis of the conjugate gradient method is lost if exact linesearches are not used, but it is lost anyway on any non-quadratic function. Silva's and Toolenaere's algorithms use a single-step method which guarantees descent by retracting any step which produces ascent. The method is not a linesearch, however, because the step following a retracted step will be in a different direction. Space limitations prohibit a detailed specification of the linesearch algorithm and the convergence criteria used. These details may be very important. A longer paper is planned in which they are to be specified, and their influence on performance studied.

1.2 The test problems

Two types of problems are used in these tests. One is a strictly-layered 3-layer back-propagation network in which the minimization variables are the weights. The test problems are 4-bit parity using 4 hidden nodes, auto-association of 10-bit random patterns using 7 hidden nodes, and the Peterson and Barney vowel classification problem (Peterson, 1952), which uses 2 inputs, 10 hidden nodes, and 10 target nodes. The other type is a fully connected recurrent network trained by the Moving Targets method (Rohwer, 1990a,b). In this case the minimization variables are the weights and the moving targets, which can be regarded as variable training data for the hidden nodes. The limit-cycle switching problem and the 100-step context sensitivity problem from these references are the test problems used. In the limit-cycle switching problem, a single target node is required to regularly generate pulses of width proportional to a 2-bit binary number indicated by 2 input nodes. In the 100-step context problem, the training data always has an input pulse at time step 100, and sometimes has an input pulse at time 0. The target node is required to turn on at time 100 if and only if there was an input pulse at time 0. Each method is tested on each problem with 10 different random initial conditions, except for the parity problem, which was done with 100 different initial conditions.

1.3 Unconventional nonlinearity

An unconventional form of nonlinearity was used in these tests. The usual f(x) = 1/(1 + e^{-x}) presents difficulties as x → ±∞ because its derivative becomes very small. This makes the system learn slowly if activations become large. Also, numerical noise becomes serious if expressions such as f(x)(1 − f(x)) are used in the derivative calculations. Various cutoff schemes are sometimes used to prevent these problems, but these introduce discontinuities and/or incorrect derivative calculations which present further problems for second-derivative methods. In early work it was found that algorithm performance was highly sensitive to the cutoff value (more systematic work on this subject is wanting), so an entirely different nonlinearity was introduced which is bounded but has reasonably large derivatives for most arguments. This combination of properties can only be had with an oscillatory function. It was also desired to retain the property of 1/(1 + e^{-x}) that it has large "saturated regions" in which it is approximately constant. The function used is

    f(x) = x/(2α) + ((1 − β)/2) (1 + β sin²(πx/α)) sin((π/2) sin((π/2) sin(πx/α)))    (2)

with α = 10 and β = 0.02.

[Figure 1 omitted: the nonlinearity used, plotted over x ∈ [-20, 20].]
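A short sketch of evaluating this nonlinearity. Since the printed formula was partly illegible, the functional form coded below (matching Eq. (2) as reconstructed above, with α = 10 and β = 0.02) should be treated as an assumption.

import numpy as np

ALPHA, BETA = 10.0, 0.02   # the parameter values stated in the text

def f(x):
    # Assumed reconstruction of Eq. (2): a nested-sine oscillation with
    # large near-flat ("saturated") regions and non-vanishing derivatives
    # for most arguments, as described above.
    u = np.pi * x / ALPHA
    return (x / (2 * ALPHA)
            + 0.5 * (1 - BETA) * (1 + BETA * np.sin(u) ** 2)
            * np.sin(0.5 * np.pi * np.sin(0.5 * np.pi * np.sin(u))))

xs = np.linspace(-20, 20, 401)   # the range shown in Figure 1
ys = f(xs)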
2 Results

An algorithm is useful if it produces good solutions quickly. The data for each algorithm-problem pair is divided into separate sets for successful and unsuccessful runs. Success is defined rather arbitrarily as less than 1% error on any target node for all training data in the back-propagation problems. In the Moving Targets problems, it is defined in terms of the maximum error on any target node in the freely-running network, the threshold being 5% for the 4-limit-cycle problem and 10% for the 100-step-context problem.

The speed data, measured in number of gradient evaluations, is presented in Figure 2, which contains 4 tables, one for each problem except random auto-association. A maximum of 10000 evaluations was allowed. Each table is divided into 7 columns, one for each algorithm. From left to right, the algorithms are Rohwer's algorithm (max_abs), conjugate gradient (cg), division by unsigned (an_abs) or signed (an_sgn) analytically computed second derivatives and using a linesearch, these two with the linesearch replaced by a single variably-sized step (an_abs_ss and an_sgn_ss), and Silva's algorithm (silva_ss). The data in each of these 7 columns is divided into 3 subcolumns: the first (a) shows all data points, the second (s) shows data for successful runs only, and the third (f) shows data for the failures. Each error bar shows the mean and standard deviation of the data in its column. The all-important little boxes at the base of each column show the proportions of runs in that column's category.

[Figure 2 omitted: gradient evaluations per algorithm for the Moving Targets 4-limit-cycle, Moving Targets 100-step-context, 2-10-10 Peterson and Barney, and 4-4-1 parity problems; the plotted data was lost in extraction.]

[Table 3 omitted: solution quality per algorithm - maximum 1-step error on any node for the two Moving Targets problems, test-set misclassifications for 2-10-10 Peterson and Barney, and final sum-squared error for 4-4-1 parity; the plotted data was lost in extraction.]
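The per-column statistics of Figure 2 (mean and standard deviation of gradient evaluations, split by success or failure, plus the success proportion) can be reproduced from raw run logs as follows. The thresholds match those stated above; the data layout and example values are hypothetical.

import numpy as np

def summarize(evals, max_errors, threshold):
    """Split runs into successes/failures by final error and report
    mean/std of gradient-evaluation counts and the success rate."""
    evals = np.asarray(evals, dtype=float)
    ok = np.asarray(max_errors) < threshold
    stats = {}
    for name, mask in [("all", np.ones_like(ok)), ("success", ok), ("failure", ~ok)]:
        sel = evals[mask.astype(bool)]
        stats[name] = (sel.mean(), sel.std()) if sel.size else (None, None)
    stats["proportion_success"] = float(ok.mean())
    return stats

# e.g. 4 hypothetical runs of the 100-step context problem (10% threshold)
print(summarize([900, 4000, 10000, 2500], [0.04, 0.08, 0.30, 0.09], threshold=0.10))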
The success criteria are quite arbitrary and inappropriate in many cases, so more detailed information on the quality of the solutions is given in Table 3. The maximum error on any target node after one time step, given the moving target values on the previous time step, is shown for the Moving Targets problems. Test-set misclassifications are shown for the Peterson and Barney data, and final sum-squared error is shown for the parity problem. The random auto-association results are omitted here to save space. They qualitatively resemble the Peterson and Barney results.

Firm conclusions cannot be drawn, but the linesearch-based algorithms tend to outperform the others. Of these, the conjugate gradient algorithm and Rohwer's algorithm (Rohwer, 1990a,b) are usually best. In recent correspondence with the author, Silva has suggested small changes in his algorithm. In particular, when the algorithm fails to find descent for 5 consecutive iterations, all the learning-rate parameters are halved. Preliminary tests suggest that this change may bring enormous improvements.

Acknowledgements

This work was supported in part by ESPRIT Basic Research Action 3207 ACTS.

References

Y. LeCun et al. (1989) Generalization and network design strategies. In R. Pfeifer (ed.), Connectionism in Perspective, 143-155. Amsterdam: North Holland.

G. E. Peterson & H. L. Barney. (1952) Control methods used in a study of vowels. J. Acoustical Soc. of America 24:175-184.

W. H. Press et al. (1988) Numerical Recipes in C: The Art of Scientific Computing. Cambridge: Cambridge U. Press.

R. Rohwer. (1990a) The 'moving targets' training algorithm. In L. B. Almeida & C. J. Wellekens (eds.), Neural Networks, Lecture Notes in Computer Science 412:100-109. Berlin: Springer-Verlag.

R. Rohwer. (1990b) The 'moving targets' training algorithm. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2:558-565. San Mateo, CA: Morgan Kaufmann.

F. M. Silva & L. B. Almeida. (1990) Acceleration techniques for the backpropagation algorithm. In L. B. Almeida & C. J. Wellekens (eds.), Neural Networks, Lecture Notes in Computer Science 412:110-119. Berlin: Springer-Verlag.

T. Toolenaere. (1990) SuperSAB: Fast Adaptive Back Propagation with Good Scaling Properties. Neural Networks 3(5):561-574.

R. Watrous. (1987) Learning algorithms for connectionist networks: Applied gradient methods of nonlinear optimization. In Caudill & Butler (eds.), IEEE Intl. Conf. on Neural Networks, II:619-627. San Diego: IEEE.
Reverse Multi-Label Learning

James Petterson
NICTA, Australian National University
Canberra, ACT, Australia
[email protected]

Tiberio Caetano
NICTA, Australian National University
Canberra, ACT, Australia
[email protected]

Abstract

Multi-label classification is the task of predicting potentially multiple labels for a given instance. This is common in several applications such as image annotation, document classification and gene function prediction. In this paper we present a formulation for this problem based on reverse prediction: we predict sets of instances given the labels. By viewing the problem from this perspective, the most popular quality measures for assessing the performance of multi-label classification admit relaxations that can be efficiently optimised. We optimise these relaxations with standard algorithms and compare our results with several state-of-the-art methods, showing excellent performance.

1 Introduction

Recently, multi-label classification (MLC) has been drawing increasing attention from the machine learning community (e.g., [1, 2, 3, 4]). Unlike in the case of multi-class learning, in MLC each instance may belong to multiple classes simultaneously. This reflects the situation in many real-world problems: in document classification, one document can cover multiple subjects; in biology, a gene can be associated with a set of functional classes [5]; in image annotation, one image can have several tags [6]. As diverse as the applications, however, are the evaluation measures used to assess the performance of different methods. That is understandable, since different applications have different goals. In e-discovery applications [7] it is mandatory that all relevant documents are retrieved, so recall is the most relevant measure. In web search, on the other hand, precision is also important, so the F1-score, which is the harmonic mean of precision and recall, might be more appropriate.

In this paper we present a method for MLC which is able to optimise appropriate surrogates for a variety of performance measures. This means that the objective function being optimised by the method is tailored to the performance measure on which we want to do well in our specific application. This is in contrast particularly with probabilistic approaches, which typically aim for maximisation of likelihood scores rather than the performance measure used to assess the quality of the results. In addition, the method is based on well-understood facts from the domain of structured output learning, which gives us theoretical guarantees regarding the accuracy of the results obtained. Finally, source code is made available by us.

An interesting aspect of the method is that we are only able to optimise the desired performance measures because we formulate the prediction problem in a reverse manner, in the spirit of [8]. We pose the prediction problem as predicting sets of instances given the labels. When this insight is fit into max-margin structured output methods, we obtain surrogate losses for the most widely used performance measures for multi-label classification. We perform experiments against state-of-the-art methods in five publicly available benchmark datasets for MLC, and the proposed approach is the best performing overall.

1.1 Related Work

The literature on this topic is vast and we cannot possibly do justice to it here, since a comprehensive review is clearly impractical.
Instead, we focus particularly on some state-of-the-art approaches that have been tested on publicly available benchmark datasets for MLC, which facilitates a fair comparison against our method.

A straightforward way to deal with multiple labels is to solve a binary classification problem for each one of them, treating them independently. This approach is known as Binary Method (BM) [9]. Classifier Chains (CC) [4] extends that by building a chain of binary classifiers, one for each possible label, but with each classifier augmented by all prior relevance predictions (a minimal sketch of this idea follows below). Since the order of the classifiers in the chain is arbitrary, the authors also propose an ensemble method, Ensemble of Classifier Chains (ECC), where several random chains are combined with a voting scheme. Probabilistic Classifier Chains (PCC) [1] extends CC to the probabilistic setting, with EPCC [1] being its corresponding ensemble method.

Another way of working with multiple labels is to consider each possible set of labels as a class, thus encoding the problem as single-label classification. The problem with that is the exponentially large number of classes. RAndom K-labELsets (RAKEL) [10] deals with that by proposing an ensemble of classifiers, each one taking a small random subset of the labels and learning a single-label classifier for the prediction of each element in the power set of this subset. Other proposed ensemble methods are Ensemble of Binary Method (EBM) [4], which applies a simple voting scheme to a set of BM classifiers, and Ensemble of Pruned Sets (EPS) [11], which combines a set of Pruned Sets (PS) classifiers. PS is essentially a problem transformation method that maps sets of labels to single labels while pruning away infrequently occurring sets. Canonical Correlation Analysis (CCA) [3] exploits label relatedness by using a probabilistic interpretation of CCA as a dimensionality reduction technique and applying it to learn useful predictive features for multi-label learning. Meta Stacking (MS) [12] also exploits label relatedness by combining text features and features indicating relationships between classes in a discriminative framework.

Two papers closely related to ours from the methodological point of view, which are however not tailored particularly to the multi-label learning problem, are [13] and [14]. In [13] the author proposes a smooth but non-concave relaxation of the F-measure for binary classification problems using a logistic regression classifier, and optimisation is performed by taking the maximum across several runs of BFGS starting from random initial values. In [14] the author proposes a method for optimising multivariate performance measures in a general setting in which the loss function is not assumed to be additive in the instances nor in the labels. The method also consists of optimising a convex relaxation of the derived losses. The key difference of our method is that we have a specialised convex relaxation for the case in which the loss does not decompose over the instances, but does decompose over the labels.
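To ground the CC baseline mentioned above, here is a minimal sketch of the chaining idea, assuming scikit-learn's LogisticRegression as the base binary learner and that each label takes both values in the training set; it illustrates the mechanism, not the implementation of [4].

import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassifierChain:
    """Minimal CC: label i is predicted from x plus the labels before it."""
    def fit(self, X, Y):                       # Y is (n_samples, n_labels) in {0,1}
        self.models = []
        Z = X
        for i in range(Y.shape[1]):
            m = LogisticRegression().fit(Z, Y[:, i])
            self.models.append(m)
            Z = np.hstack([Z, Y[:, i:i + 1]])  # augment with the true label during training
        return self

    def predict(self, X):
        Z, preds = X, []
        for m in self.models:
            p = m.predict(Z).reshape(-1, 1)
            preds.append(p)
            Z = np.hstack([Z, p])              # augment with the predicted label at test time
        return np.hstack(preds)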
2 The Model

Let the input x ∈ X denote a label (e.g., a tag of an image), and the output y ∈ Y denote a set of instances (e.g., a set of training images). Let N = |X| be the number of labels and V be the number of instances. An input label x is encoded as x ∈ {0, 1}^N, s.t. Σ_i x_i = 1. For example, if N = 5 the second label is denoted as x = [0 1 0 0 0]. An output instance y is encoded as y ∈ {0, 1}^V (Y := {0, 1}^V), and y_n = 1 iff instance n was annotated with the label i encoded by x. For example, if V = 10 and only instances 1 and 3 are annotated with label 2, then the y corresponding to x = [0 1 0 0 0] is y = [1 0 1 0 0 0 0 0 0 0]. We assume a given training set {(x^n, y^n)}_{n=1}^N, where {x^n}_{n=1}^N comprises the entirety of labels available ({x^n}_{n=1}^N = X), and {y^n}_{n=1}^N represents the sets of instances associated to those labels. The task consists of estimating a map f : X → Y which reproduces well the outputs of the training set (i.e., f(x^n) ≈ y^n) but also generalises well to new test instances.

2.1 Loss Functions

The reason for this reverse prediction is the following: most widely accepted performance measures target information retrieval (IR) applications; that is, given a label we want to find a set of relevant instances. As a consequence, the measures are averaged over the set of possible labels. This is the case for, in particular, Macro-precision, Macro-recall, Macro-F_β and Hamming loss [10] (Macro-F1 is the particular case of Macro-F_β when β equals 1; Macro-precision and Macro-recall are its limits for β → 0 and β → ∞, respectively):

    Macro-precision = (1/N) Σ_{n=1}^N p(y^n, ŷ^n),    Macro-recall = (1/N) Σ_{n=1}^N r(y^n, ŷ^n),

    Macro-F_β = (1/N) Σ_{n=1}^N (1 + β²) p(y^n, ŷ^n) r(y^n, ŷ^n) / (β² p(y^n, ŷ^n) + r(y^n, ŷ^n)),

    Hamming loss = (1/N) Σ_{n=1}^N h(y^n, ŷ^n),

where

    p(y, ŷ) = yᵀŷ / ŷᵀŷ,    r(y, ŷ) = yᵀŷ / yᵀy,    h(y, ŷ) = (yᵀ1 + ŷᵀ1 − 2 yᵀŷ) / V.

Here, ŷ^n is our prediction for input label n, and y^n the corresponding ground truth. Since these measures average over the labels, in order to optimise them we need to average over the labels as well, and this happens naturally in a setting in which the empirical risk is additive on the labels. (The Hamming loss also averages over the instances, so it can be optimised in the "normal", not reverse, direction as well.)

Instead of maximising a performance measure, we frame the problem as minimising a loss function associated to the performance measure. We assume a known loss function Δ : Y × Y → R₊ which assigns a non-negative number to every possible pair of outputs. This loss function represents how much we want to penalise a prediction ŷ when the correct prediction is y, i.e., it has the opposite semantics of a performance measure. As already mentioned, we will be able to deal with a variety of loss functions in this framework, but for concreteness of exposition we will focus on a loss derived from the Macro-F_β score defined above, whose particular case for β equal to 1 (F1) is arguably the most popular performance measure for multi-label classification. In our notation, the F_β score of a given prediction is

    F_β(y, ŷ) = (1 + β²) yᵀŷ / (β² yᵀy + ŷᵀŷ),    (1)

and since F_β is a score of alignment between y and ŷ, one possible choice for the loss is Δ(y, ŷ) = 1 − F_β(y, ŷ), which is the one we focus on in this paper:

    Δ(y, ŷ) = 1 − (1 + β²) yᵀŷ / (β² yᵀy + ŷᵀŷ).    (2)

2.2 Features and Parameterization

Our next assumption is that the prediction for a given input x returns the maximiser(s) of a linear score of the model parameter vector θ, i.e., a prediction is given by ŷ such that

    ŷ ∈ argmax_{y∈Y} ⟨φ(x, y), θ⟩,    (3)

where ⟨A, B⟩ denotes the inner product of the vectorized versions of A and B. Here we assume that φ(x, y) is linearly composed of features of the instances encoded in each y_v, i.e., φ(x, y) = Σ_{v=1}^V y_v (φ_v ⊗ x). The vector φ_v is the feature representation for instance v. The map φ(x, y) will be the zero vector whenever y_v = 0, i.e., when instance v does not have label x. The feature map φ(x, y) has a total of DN dimensions, where D is the dimensionality of our instance features (φ_v) and N is the number of labels. Therefore DN is the dimensionality of our parameter θ to be learned.
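A minimal sketch of these two building blocks, the joint feature map and the F_β-derived loss, with Phi stacking the instance features φ_v as rows (variable names are ours; the code assumes nonempty label sets so that the F_β denominator is nonzero):

import numpy as np

def joint_feature(x, y, Phi):
    """phi(x, y) = sum_v y_v (phi_v ⊗ x): a D x N matrix whose only
    nonzero column is the one picked out by the one-hot label x."""
    return np.outer(Phi.T @ y, x)           # (D,) outer (N,) -> D x N

def f_beta(y, y_hat, beta=1.0):
    """Eq. (1): F_beta alignment between two 0/1 instance-indicator vectors."""
    return (1 + beta**2) * (y @ y_hat) / (beta**2 * (y @ y) + y_hat @ y_hat)

def loss(y, y_hat, beta=1.0):
    return 1.0 - f_beta(y, y_hat, beta)     # Eq. (2)

# toy usage: V = 4 instances with D = 3 features, N = 2 labels
Phi = np.random.randn(4, 3)
x = np.array([0.0, 1.0])                    # one-hot label
y = np.array([1.0, 0.0, 1.0, 0.0])          # instances annotated with that label
print(joint_feature(x, y, Phi).shape, loss(y, np.array([1.0, 1.0, 0.0, 0.0])))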
2.3 Optimisation Problem

We are now ready to formulate our estimator. We assume an initial, "ideal" estimator taking the form

    θ* = argmin_θ [ (1/N) Σ_{n=1}^N Δ(ŷ(x^n; θ), y^n) + (λ/2) ‖θ‖² ].    (4)

In other words, we want to find a model that minimises the average prediction loss in the training set plus a quadratic regulariser that penalises complex solutions (the parameter λ determines the trade-off between data fitting and good generalisation). Estimators of this type are known as regularised risk minimisers [15].

3 Optimisation

3.1 Convex Relaxation

The optimisation problem (4) is non-convex. Even more critical, the loss is a piecewise constant function of θ. (There is a countable number of loss values but an uncountable number of parameters, so there are large equivalence classes of parameters that correspond to precisely the same loss.) A similar problem occurs when one aims at optimising a 0/1 loss in binary classification; in that case, a typical workaround consists of minimising a surrogate convex loss function which upper bounds the 0/1 loss, for example the hinge loss, which gives rise to the support vector machine. Here we use an analogous approach, notably popularised in [16], which optimises a convex upper bound on the structured loss of (4). The resulting optimisation problem is

    [θ*, ξ*] = argmin_{θ,ξ} [ (1/N) Σ_{n=1}^N ξ_n + (λ/2) ‖θ‖² ]    (5)

    s.t.  ⟨φ(x^n, y^n), θ⟩ − ⟨φ(x^n, y), θ⟩ ≥ Δ(y, y^n) − ξ_n,  ∀n, y ∈ Y;  ξ_n ≥ 0.    (6)

It is easy to see that ξ_n* upper bounds Δ(ŷ*^n, y^n) (and therefore the objective in (5) upper bounds that of (4) for the optimal solution). Here, ŷ*^n := argmax_y ⟨φ(x^n, y), θ*⟩. First note that since the constraints (6) hold for all y, they also hold for ŷ*^n. Second, the left hand side of the inequality for y = ŷ*^n must be non-positive from the definition of ŷ in equation (3). It then follows that ξ_n* ≥ Δ(ŷ*^n, y^n).

The constraints (6) basically enforce a loss-sensitive margin: θ is learned so that mispredictions y that incur some loss end up with a score ⟨φ(x^n, y), θ⟩ that is smaller than the score ⟨φ(x^n, y^n), θ⟩ of the correct prediction y^n by a margin equal to that loss (minus slack ξ). The formulation is a generalisation of support vector machines for the case in which there is an exponential number of classes y. It is in this sense that our approach is somewhat related in spirit to [16], as mentioned in the Introduction. However, as described below, here we can use a method for selecting a polynomial number of constraints which provably approximates well the original problem.

The optimisation problem (5) has N|Y| = N·2^V constraints. Naturally, this number is too large to allow for a practical solution of the quadratic program. Here we resort to a constraint generation strategy, which consists of starting with no constraints and iteratively adding the most violated constraint for the current solution of the optimisation problem. Such an approach is assured to find an ε-close approximation of the solution of (5) after including only O(ε⁻²) constraints [16].
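Schematically, the constraint-generation strategy is the following loop; solve_qp and most_violated are hypothetical placeholders for a solver of the quadratic program (5)-(6) restricted to the active constraints, and for the argmax of Eq. (7) below (Algorithm 2):

def learn_by_constraint_generation(data, solve_qp, most_violated, max_rounds=100):
    """Start with no constraints; at each round add, for each example, the
    assignment maximising the violation margin, then re-solve the QP."""
    constraints = []                            # (n, y) pairs defining rows of (6)
    theta = None
    for _ in range(max_rounds):
        theta = solve_qp(constraints, data)
        new = [(n, tuple(most_violated(x, y, theta)))
               for n, (x, y) in enumerate(data)]
        added = [c for c in new if c not in constraints]
        if not added:                           # nothing violated: eps-optimal
            break
        constraints.extend(added)
    return theta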
The key problem that needs to be solved at each iteration is constraint generation, i.e., to find the maximiser of the violation margin ξ_n:

    y*_n ∈ argmax_{y∈Y} [ Δ(y, y^n) + ⟨φ(x^n, y), θ⟩ ].    (7)

The difficulty in solving the above optimisation problem depends on the choice of φ(x, y) and Δ. Next we investigate how this problem can be solved for our particular choices of these quantities.

3.2 Constraint Generation

Using eq. (2) and φ(x, y) = Σ_{v=1}^V y_v (φ_v ⊗ x), eq. (7) becomes

    y*_n ∈ argmax_{y∈Y} ⟨y, z_n⟩,    (8)

where

    z_n = Φθ̄_n − (1 + β²) y^n / (‖y‖² + β² ‖y^n‖²),    (9)

Φ is a V × D matrix with row v corresponding to φ_v, and θ̄_n is the nth column of the matrix θ.

We now investigate how to solve (8) for a fixed θ. For the purpose of clarity, here we describe a simple, naïve algorithm. In the appendix we present a more involved but much faster algorithm. A simple algorithm can be obtained by first noticing that z_n depends on y only through the number of its nonzero elements. Consider the set of all y with precisely k nonzero elements, i.e., Y_k := {y : ‖y‖² = k}. Then the objective in (8), if the maximisation is instead restricted to the domain Y_k, is effectively linear in y, since z_n in this case is constant w.r.t. y. Therefore we can solve separately for each Y_k by finding the top k entries of z_n. Finding the top k elements of a list of size V can be done in O(V) time [17]. Therefore we have an O(V²) algorithm (for every k from 1 to V, solve argmax_{y∈Y_k} ⟨y, z_n⟩ in O(V) expected time). Algorithm 1 describes in detail the optimisation, as solved by BMRM [18], and Algorithm 2 shows the naïve constraint-generation routine.

Algorithm 1 Reverse Multi-Label Learning
1: Input: training set {(x^n, y^n)}_{n=1}^N, λ, β. Output: θ
2: Initialize i = 1, θ_1 = 0
3: repeat
4:   for n = 1 to N do
5:     Compute y*_n (naïve: Algorithm 2; improved: see Appendix)
6:   end for
7:   Compute gradient g_i (equation (12)) and objective o_i (equation (11))
8:   θ_{i+1} := argmin_θ (λ/2)‖θ‖² + max(0, max_{j≤i} [⟨g_j, θ⟩ + o_j]); i ← i + 1
9: until converged (see [18])
10: return θ

Algorithm 2 Naïve Constraint Generation
1: Input: (x^n, y^n), θ, β, Φ, V. Output: y*_n
2: MAX = −∞
3: for k = 1 to V do
4:   z_n = Φθ̄_n − (1 + β²) y^n / (k + β² ‖y^n‖²)
5:   y = argmax_{y∈Y_k} ⟨y, z_n⟩ (i.e., find the top k entries of z_n in O(V) time)
6:   CURRENT = max_{y∈Y_k} ⟨y, z_n⟩
7:   if CURRENT > MAX then
8:     MAX = CURRENT
9:     y*_n = y
10:  end if
11: end for
12: return y*_n

The BMRM solver requires both the value of the objective function for the slack corresponding to the most violated constraint and its gradient. The value of the slack variable corresponding to y*_n is

    ξ*_n = Δ(y*_n, y^n) + ⟨φ(x^n, y*_n), θ⟩ − ⟨φ(x^n, y^n), θ⟩,    (10)

thus the objective function from (5) becomes

    (λ/2)‖θ‖² + (1/N) Σ_n [ Δ(y*_n, y^n) + ⟨φ(x^n, y*_n), θ⟩ − ⟨φ(x^n, y^n), θ⟩ ],    (11)

whose gradient (with respect to θ) is

    λθ − (1/N) Σ_n ( φ(x^n, y^n) − φ(x^n, y*_n) ).    (12)

We need both expressions (11) and (12) in Algorithm 1.

3.3 Prediction at Test Time

The problem to be solved at test time (eq. (3)) has the same form as the problem of constraint generation (eq. (7)), the only difference being that z_n = Φθ̄_n (i.e., the second term in eq. (9), due to the loss, is not present). Since z_n is a constant vector, the solution y*_n for (7) is the vector that indicates the positive entries of z_n, which can be efficiently found in O(V). Therefore inference at prediction time is very fast.
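A direct transcription of Algorithm 2 into Python/NumPy; variable names (Phi, theta_n, y_n) follow the notation above, np.argpartition stands in for the O(V) top-k selection of [17], and this is a sketch rather than the authors' C++ implementation.

import numpy as np

def naive_constraint_generation(Phi, theta_n, y_n, beta):
    """Algorithm 2: y* = argmax_y [Delta(y, y_n) + <phi(x_n, y), theta>],
    scanning over k = ||y||_0, for which the objective is linear in y."""
    V = Phi.shape[0]
    scores = Phi @ theta_n                      # <phi_v, theta_n> for each instance v
    norm_yn = float(y_n @ y_n)
    best_val, best_y = -np.inf, None
    for k in range(1, V + 1):
        z = scores - (1 + beta**2) * y_n / (k + beta**2 * norm_yn)
        top = np.argpartition(-z, k - 1)[:k]    # indices of the k largest entries of z
        y = np.zeros(V)
        y[top] = 1.0
        val = z[top].sum()
        if val > best_val:
            best_val, best_y = val, y
    return best_y

# usage with random toy data
Phi = np.random.randn(8, 4)
theta_n = np.random.randn(4)
y_n = (np.random.rand(8) > 0.5).astype(float)
y_star = naive_constraint_generation(Phi, theta_n, y_n, beta=1.0)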
Table 1: Evaluation scores and corresponding losses

    score            Δ(y, ŷ)
    macro-F_β        1 − (1 + β²) yᵀŷ / (β² yᵀy + ŷᵀŷ)
    macro-precision  1 − yᵀŷ / ŷᵀŷ
    macro-recall     1 − yᵀŷ / yᵀy
    Hamming loss     (yᵀ1 + ŷᵀ1 − 2 yᵀŷ) / V

Table 2: Datasets. #train/#test denotes the number of observations used for training and testing respectively; N is the number of labels and D the dimensionality of the features.

    dataset   domain   #train  #test  N   D
    yeast     biology   1500    917   14   103
    scene     image     1211   1196    6   294
    medical   text       645    333   45  1449
    enron     text      1123    579   53  1001
    emotions  music      391    202    6    72

3.4 Other Scores

Up to now we have focused on optimising Macro-F_β, which already gives us several scores, in particular Macro-F1, macro-recall and macro-precision. We can, however, optimise other scores, in particular the popular Hamming loss. Table 1 shows a list with the corresponding losses, which we then plug into eq. (4). Note that for Hamming loss and macro-recall the denominator is constant, and therefore it is not necessary to solve (8) multiple times as described earlier, which makes constraint generation as fast as test-time prediction (see subsection 3.3).

4 Experimental Results

In this section we evaluate our method on several real-world datasets, for both macro-F_β and Hamming loss. These scores were chosen because macro-F_β is a generalisation of the most relevant scores, and the Hamming loss is a generic, popular score in the multi-label classification literature.

Datasets. We used 5 publicly available multi-label datasets (http://mulan.sourceforge.net/datasets.html): yeast, scene, medical, enron and emotions. We selected these datasets because they cover a variety of application domains (biology, image, text and music) and there are published results of competing methods on them for some of the popular evaluation measures for MLC (macro-F1 and Hamming loss). Table 2 describes them in more detail.

Model selection. Our model requires only one parameter: λ, the trade-off between data fitting and good generalisation. For each experiment we selected it with 5-fold cross-validation using only the training data.

Implementation. Our implementation is in C++, using the Bundle Methods for Risk Minimization (BMRM) of [18] as a base. Source code is available at http://users.cecs.anu.edu.au/~jpetterson/ under the Mozilla Public License (http://www.mozilla.org/MPL/MPL-1.1.html).

Comparison to published results on Macro-F1. In our first set of experiments we compared our model to published results on the Macro-F1 score. We strived to make our comparison as broad as possible, but we limited ourselves to methods with published results on public datasets, where the experimental setting was described in enough detail to allow us to make a fair comparison. We therefore compared our model to Canonical Correlation Analysis [3] (CCA), Binary Method [9] (BM), Classifier Chains [4] (CC), Subset Mapping [19] (SM), Meta Stacking [12] (MS), Ensembles of Binary Method [4] (EBM), Ensembles of Classifier Chains [4] (ECC), Ensembles of Pruned Sets [11] (EPS) and Random K Label Subsets [10] (RAKEL). Table 3 summarizes our results, along with competing methods', which were taken from compilations by [3] and [4]. We can see that our model has the best performance in yeast, medical and enron. In scene it doesn't perform as well; we suspect this is related to the label cardinality of this dataset: almost all instances have just one label, making this essentially equivalent to a multiclass dataset.
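For reference, the quantities reported in these comparisons can be computed from a prediction matrix in a few lines; a minimal sketch in the notation above (Theta collects the columns θ̄_n, the variable names are ours), with test-time prediction as in Section 3.3:

import numpy as np

def predict(Phi, Theta):
    """Label n's prediction marks the instances v with <phi_v, theta_n> > 0."""
    return (Phi @ Theta > 0).astype(float)      # V x N matrix of 0/1 predictions

def macro_f_beta(Y, Y_hat, beta=1.0):
    """Macro-averaged F_beta over labels (columns), as in Section 2.1;
    a tiny epsilon guards against empty label sets."""
    scores = []
    for y, y_hat in zip(Y.T, Y_hat.T):
        scores.append((1 + beta**2) * (y @ y_hat)
                      / (beta**2 * (y @ y) + y_hat @ y_hat + 1e-12))
    return float(np.mean(scores))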
Comparison to published results on Hamming loss. To illustrate the flexibility of our model we also evaluated it on the Hamming loss. Here, we compared our model to Classifier Chains [4] (CC), Probabilistic Classifier Chains [1] (PCC), Ensembles of Classifier Chains [4] (ECC) and Ensembled Probabilistic Classifier Chains [1] (EPCC). These are the methods for which we could find Hamming loss results on publicly available data. Table 4 summarizes our results, along with competing methods', which were taken from a compilation by [1]. As can be seen, our model has the best performance on both datasets.

Results on F_β. One strength of our method is that it can be optimised for the specific measure we are interested in. In Macro-F_β, for example, β is a trade-off between precision and recall: when β → 0 we recover precision, and when β → ∞ we get recall. Unlike with other methods, given a desired precision/recall trade-off encoded in a choice of β, we can optimise our model such that it gets the best performance on Macro-F_β. To show this we ran our method on all five datasets, but this time with different choices of β, ranging from 10⁻² to 10². In this case, however, we could not find published results to compare to, so we used Mulan (http://mulan.sourceforge.net/), an open-source library for learning from multi-label datasets, to train three models: BM [9], RAKEL [10] and ML-KNN [20]. BM was chosen as a simple baseline, and RAKEL and ML-KNN are the two state-of-the-art methods available in the package. ML-KNN has two parameters: the number of neighbors k and a smoothing parameter s controlling the strength of the uniform prior. We kept both fixed to 10 and 1.0, respectively, as was done in [20]. RAKEL has three parameters: the number of models m, the size of the labelset k and the threshold t. Since a complete search over the parameter space would be impractical, we adopted the library's defaults for t and m (respectively 0.5 and 2N) and set k to N/2 as suggested by [4]. For BM we kept the library's defaults.

In Figure 1 we plot the results. We can see that BM tends to prioritize recall (right side of the plot), while ML-KNN and RAKEL give more emphasis to precision (left side). Our method, however, does well on both sides, as it is trained separately for each value of β. In both scene and yeast it dominates the right side while still being competitive on the left side. And in the other three datasets (medical, enron and emotions) it practically dominates over the entire range of β.

5 Conclusion and Future Work

We presented a new approach to multi-label learning which consists of predicting sets of instances from the labels. This apparently unintuitive approach is in fact natural since, once the problem is viewed from this perspective, many popular performance measures admit convex relaxations that can be directly and efficiently optimised with existing methods. The method only requires one parameter, as opposed to most existing methods, which have several. The method leverages existing tools from structured output learning, which gives us certain theoretical guarantees. A simple version of constraint generation is presented for small problems, but we also developed a scalable, fast version for dealing with large datasets. We presented a detailed experimental comparison against several state-of-the-art methods and overall our performance is notably superior. A fundamental limitation of our current approach is that it does not handle dependencies among labels. It is however possible to include such dependencies by assuming, for example, a bivariate feature map on the labels, rather than a univariate one.
This, however, complicates the algorithmics, and is left as a subject for future research.

Acknowledgements

We thank Miro Dudík as well as the anonymous reviewers for insightful observations that helped to improve the paper. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

Table 3: Macro-F1 results. Bold face indicates the best performance. We don't have results for CCA in the Medical and Enron datasets.

    Dataset  Ours   CCA    CC     BM     SM     MS     ECC    EBM    EPS    RAKEL
    Yeast    0.440  0.346  0.346  0.326  0.327  0.331  0.362  0.364  0.420  0.413
    Scene    0.671  0.374  0.696  0.685  0.666  0.694  0.742  0.729  0.763  0.750
    Medical  0.420  -      0.377  0.364  0.321  0.370  0.386  0.382  0.324  0.377
    Enron    0.243  -      0.198  0.197  0.144  0.198  0.201  0.201  0.155  0.206

Table 4: Hamming loss results. Bold face indicates the best performance.

    Dataset   Ours    CC      PCC     ECC     EPCC
    Scene     0.1271  0.1780  0.1780  0.1503  0.1498
    Emotions  0.2252  0.2448  0.2417  0.2428  0.2372

[Figure 1 plots omitted: macro-F_β vs. log β for scene, yeast, medical, enron and emotions, comparing ML-KNN, RAKEL, BM and our method.]

Figure 1: Macro-F_β results on five datasets, with β ranging from 10⁻² to 10² (i.e., log10 β ranging from -2 to 2). The center point (log β = 0) corresponds to Macro-F1. β controls a trade-off between Macro-precision (left side) and Macro-recall (right side).

References

[1] Krzysztof Dembczynski, Weiwei Cheng, and Eyke Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In Proc. Intl. Conf. Machine Learning, 2010.
[2] Xinhua Zhang, T. Graepel, and Ralf Herbrich. Bayesian online learning for multi-label and multi-variate performance measures. In Proc. Intl. Conf. on Artificial Intelligence and Statistics, volume 9, pages 956-963, 2010.
[3] Piyush Rai and Hal Daume. Multi-label prediction via sparse infinite CCA. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1518-1526. 2009.
[4] Jesse Read, Bernhard Pfahringer, Geoffrey Holmes, and Eibe Frank. Classifier chains for multi-label classification. In Wray L. Buntine, Marko Grobelnik, Dunja Mladenic, and John Shawe-Taylor, editors, ECML/PKDD (2), volume 5782 of Lecture Notes in Computer Science, pages 254-269. Springer, 2009.
[5] André Elisseeff and Jason Weston. A kernel method for multi-labelled classification. In Annual ACM Conference on Research and Development in Information Retrieval, pages 274-281, 2005.
[6] Matthieu Guillaumin, Thomas Mensink, Jakob Verbeek, and Cordelia Schmid. TagProp: discriminative metric learning in nearest neighbor models for image auto-annotation. In Proc. Intl. Conf. Computer Vision, 2009.
[7] Douglas W. Oard and Jason R. Baron. Overview of the TREC 2008 legal track.
[8] Linli Xu, Martha White, and Dale Schuurmans. Optimal reverse prediction. Proc. Intl. Conf. Machine Learning, pages 1-8, 2009.
[9] Grigorios Tsoumakas, Ioannis Katakis, and Ioannis P.
Vlahavas. Mining Multi-label Data. Springer, 2009.
[10] Grigorios Tsoumakas and Ioannis P. Vlahavas. Random k-labelsets: an ensemble method for multilabel classification. In Proceedings of the 18th European Conference on Machine Learning (ECML 2007), pages 406-417, Warsaw, Poland, 2007.
[11] Jesse Read, Bernhard Pfahringer, and Geoff Holmes. Multi-label classification using ensembles of pruned sets. In ICDM '08: Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, pages 995-1000, Washington, DC, USA, 2008. IEEE Computer Society.
[12] Shantanu Godbole and Sunita Sarawagi. Discriminative methods for multi-labeled classification. In Proceedings of the 8th Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 22-30. Springer, 2004.
[13] Martin Jansche. Maximum expected F-measure training of logistic regression models. HLT, pages 692-699, 2005.
[14] T. Joachims. A support vector method for multivariate performance measures. In Proc. Intl. Conf. Machine Learning, pages 377-384, San Francisco, California, 2005. Morgan Kaufmann Publishers.
[15] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[16] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453-1484, 2005.
[17] D. E. Knuth. The Art of Computer Programming: Fundamental Algorithms, volume 1. Addison-Wesley, Reading, Massachusetts, second edition, 1998.
[18] Choon Hui Teo, S. V. N. Vishwanathan, Alex J. Smola, and Quoc V. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311-365, 2010.
[19] Robert E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
[20] Min-Ling Zhang and Zhi-Hua Zhou. ML-KNN: a lazy learning approach to multi-label learning. Pattern Recognition, 40(7):2038-2048, July 2007.
More data means less inference: A pseudo-max approach to structured learning

David Sontag, Microsoft Research
Ofer Meshi, Hebrew University
Tommi Jaakkola, CSAIL, MIT
Amir Globerson, Hebrew University

Abstract

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.

Many prediction problems in machine learning applications are structured prediction tasks. For example, in protein folding we are given a protein sequence and the goal is to predict the protein's native structure [14]. In parsing for natural language processing, we are given a sentence and the goal is to predict the most likely parse tree [2]. In these and many other applications, we can formalize the structured prediction problem as taking an input x (e.g., primary sequence, sentence) and predicting y (e.g., structure, parse) according to y = argmax_{ŷ∈Y} θ · φ(x, ŷ), where φ(x, y) is a function that maps any input and a candidate assignment to a feature vector, Y denotes the space of all possible assignments to the vector y, and θ is a weight vector to be learned.

This paper addresses the problem of learning structured prediction models from data. In particular, given a set of labeled examples {(x^m, y^m)}_{m=1}^M, our goal is to find a vector θ such that for each example m, y^m = argmax_{y∈Y} θ · φ(x^m, y), i.e. one which separates the training data. For many structured prediction models, maximization over Y is computationally intractable. This makes it difficult to apply previous algorithms for learning structured prediction models, such as structured perceptron [2], stochastic subgradient [10], and cutting-plane algorithms [5], which require making a prediction at every iteration (equivalent to repeatedly solving an integer linear program).

Given training data, we can consider the space of parameters Θ that separate the data. This space can be defined by the intersection of a large number of linear inequalities. A recent approach to getting around the hardness of prediction is to use linear programming (LP) relaxations to approximate the maximization over Y [4, 6, 9]. However, separation with respect to a relaxation places stronger constraints on the parameters. The target solution, an integral vertex in the LP, must now distinguish itself also from possible fractional vertices that arise due to the relaxation. The relaxations can therefore be understood as optimizing over an inner bound of Θ. This set may be empty even if the training data is separable with exact inference [6]. Another obstacle to using LP relaxations for learning is that solving the LPs can be very slow.

In this paper we ask whether it is possible to learn while avoiding inference altogether. We propose a new learning algorithm, inspired by pseudo-likelihood [1], that optimizes over an outer bound of Θ. Learning involves optimizing over only a small number of constraints per data point, and thus can be performed quickly, even for complex structured prediction models. We show that, if the data available for learning is "nice", this algorithm is consistent, i.e. it will find some θ ∈ Θ.
This is an example of how having the right data can circumvent the hardness of learning for structured prediction.

We also investigate the limitations of the proposed method. We show that the problem of even deciding whether a given data set is separable is NP-hard, and thus learning in a strict sense is no easier than prediction. Thus, we should not expect our algorithm, or any other polynomial time algorithm, to always succeed at learning from an arbitrary finite data set. To our knowledge, this is the first result characterizing the hardness of exact learning for structured prediction. Finally, we show empirically that our algorithm allows us to successfully learn the parameters for both multi-label prediction and protein side-chain placement. The performance of the algorithm improves as more data becomes available, as our theoretical results anticipate.

1 Pseudo-Max method

We consider the general structured prediction problem. The input space is denoted by X and the set of all possible assignments by Y. Each y ∈ Y corresponds to n variables y_1, ..., y_n, each with k possible states. The classifier uses a (given) function φ(x, y) : X, Y → R^d and (learned) weights θ ∈ R^d, and is defined as y(x; θ) = argmax_{ŷ∈Y} f(ŷ; x, θ), where f is the discriminant function f(y; x, θ) = θ · φ(x, y). Our analysis will focus on functions φ whose scope is limited to small sets of the y_i variables, but for now we keep the discussion general.

Given a set of labeled examples {(x^m, y^m)}_{m=1}^M, the goal of the typical learning problem is to find weights θ that correctly classify the training examples. Consider first the separable case. Define the set of separating weight vectors,

    Θ = { θ | ∀m, y ∈ Y: f(y^m; x^m, θ) ≥ f(y; x^m, θ) + e(y, y^m) }.

Here e is a loss function (e.g., zero-one or Hamming) such that e(y^m, y^m) = 0 and e(y, y^m) > 0 for y ≠ y^m, which serves to rule out the trivial solution θ = 0. (An alternative formulation, which we use in the next section, is to break the symmetry by having part of the input not be multiplied by any weight; this will also rule out the trivial solution θ = 0.) The space Θ is defined by exponentially many constraints per example, one for each competing assignment.

In this work we consider a much simpler set of constraints where, for each example, we only consider the competing assignments obtained by modifying a single label y_i, while fixing the other labels to their value at y^m. The pseudo-max set, which is an outer bound on Θ, is given by

    Θ_ps = { θ | ∀m, i, y_i: f(y^m; x^m, θ) ≥ f(y^m_{-i}, y_i; x^m, θ) + e(y_i, y_i^m) }.    (1)

Here y^m_{-i} denotes the label y^m without the assignment to y_i. When the data is not separable, Θ will be the empty set. Instead, we may choose to minimize the hinge loss,

    ℓ(θ) = Σ_m max_y [ f(y; x^m, θ) − f(y^m; x^m, θ) + e(y, y^m) ],

which can be shown to be an upper bound on the training error [13]. When the data is separable, min_θ ℓ(θ) = 0. Note that regularization may be added to this objective. The corresponding pseudo-max objective replaces the maximization over all of y with maximization over a single variable y_i while fixing the other labels to their value at y^m:

    ℓ_ps(θ) = Σ_{m=1}^M Σ_{i=1}^n max_{y_i} [ f(y^m_{-i}, y_i; x^m, θ) − f(y^m; x^m, θ) + e(y_i, y_i^m) ].    (2)

(It is possible to use max_i instead of Σ_i, and some of our consistency results will still hold. Note also that the pseudo-max approach is markedly different from a learning method which predicts each label y_i independently, since the objective considers all i simultaneously, both at learning and test time.) Analogous to before, we have min_θ ℓ_ps(θ) = 0 if and only if θ ∈ Θ_ps.

The objective in Eq. 2 is similar in spirit to pseudo-likelihood objectives used for maximum likelihood estimation of parameters of Markov random fields (MRFs) [1]. The pseudo-likelihood estimate is provably consistent when the data generating distribution is a MRF of the same structure as used in the pseudo-likelihood objective. However, our setting is different since we only get to view the maximizing assignment of the MRF rather than samples from it. Thus, a particular x will always be paired with the same y rather than samples y drawn from the conditional distribution p(y|x; θ).

The pseudo-max constraints in Eq. 1 are also related to cutting plane approaches to inference [4, 5]. In the latter, the learning problem is solved by repeatedly looking for assignments that violate the separability constraint (or its hinge version). Our constraints can be viewed as using a very small subset of assignments for the set of candidate constraint violators.
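For intuition, a small sketch of the pseudo-max objective ℓ_ps of Eq. (2) for binary labels, using the Ising-style discriminant introduced in Section 2 below and a 0/1 per-label loss e; the variable names and toy data are ours.

import numpy as np

def f(y, x, J, edges):
    """Discriminant of Eq. (3): sum_{ij in E} J_ij y_i y_j + sum_i y_i x_i."""
    return sum(J[e] * y[i] * y[j] for e, (i, j) in enumerate(edges)) + y @ x

def pseudo_max_loss(data, J, edges):
    """Eq. (2): compare each y^m only against its single-coordinate flips,
    with e(y_i, y_i^m) = 1 if y_i != y_i^m.  For binary labels the inner max
    over y_i reduces to the one flipped value (y_i = y_i^m contributes 0)."""
    total = 0.0
    for x, y in data:
        fy = f(y, x, J, edges)
        for i in range(len(y)):
            y_flip = y.copy()
            y_flip[i] = 1 - y[i]
            total += max(0.0, f(y_flip, x, J, edges) - fy + 1.0)
    return total

# toy usage: a two-variable model with a single edge
data = [(np.array([0.5, -0.2]), np.array([1, 0]))]
print(pseudo_max_loss(data, J=np.array([0.3]), edges=[(0, 1)]))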
However, our setting is different, since we only get to view the maximizing assignment of the MRF rather than samples from it. Thus, a particular $x$ will always be paired with the same $y$ rather than samples $y$ drawn from the conditional distribution $p(y|x; \theta)$.

The pseudo-max constraints in Eq. 1 are also related to cutting-plane approaches to inference [4, 5]. In the latter, the learning problem is solved by repeatedly looking for assignments that violate the separability constraint (or its hinge version). Our constraints can be viewed as using a very small subset of assignments for the set of candidate constraint violators. We also note that when exact maximization over the discriminant function $f(y; x, \theta)$ is hard, the standard cutting-plane algorithm cannot be employed, since it is infeasible to find a violated constraint. For the pseudo-max objective, finding a constraint violation is simple and linear in the number of variables.⁴

It is easy to see (as will be elaborated on next) that the pseudo-max method does not in general yield a consistent estimate of $\theta$, even in the separable case. However, as we show, consistency can be achieved under particular assumptions on the data generating distribution $p(x)$.

2 Consistency of the Pseudo-Max method

In this section we show that if the feature generating distribution $p(x)$ satisfies particular assumptions, then the pseudo-max approach yields a consistent estimate. In other words, if the training data is of the form $\{(x^m, y(x^m; \theta^*))\}_{m=1}^M$ for some true parameter vector $\theta^*$, then as $M \to \infty$ the minimum of the pseudo-max objective will converge to $\theta^*$ (up to equivalence transformations).

The section is organized as follows. First, we provide intuition for the consistency results by considering a model with only two variables. Then, in Sec. 2.1, we show that any parameter $\theta^*$ can be identified to within arbitrary accuracy by choosing a particular training set (i.e., choice of $x^m$). This in itself proves consistency, as long as there is a non-zero probability of sampling this set. In Sec. 2.2 we give a more direct proof of consistency by using strict convexity arguments. For ease of presentation, we shall work with a simplified instance of the structured learning setting.

¹ An alternative formulation, which we use in the next section, is to break the symmetry by having part of the input not be multiplied by any weight. This will also rule out the trivial solution $\theta = 0$.
² It is possible to use $\max_i$ instead of $\sum_i$, and some of our consistency results will still hold.
³ The pseudo-max approach is markedly different from a learning method which predicts each label $y_i$ independently, since the objective considers all $i$ simultaneously (both at learning and test time).

[Figure 1 appears here. Left panel: the partitioning of $\mathcal{X}$ induced by the configurations $y(x)$, with decision lines such as $J^* + x_1 = 0$, $J^* + x_2 = 0$, and $J^* + x_1 + x_2 = 0$ separating the regions labeled $y = (0,0), (0,1), (1,0), (1,1)$. Right panel: the strictly convex component $g(J_{12})$ plotted against $J$ for $c_1 \in \{-1, 0, 1\}$.]

Figure 1: Illustrations for a model with two variables. Left: Partitioning of $\mathcal{X}$ induced by configurations $y(x)$ for some $J^* > 0$. Blue lines carve out the exact regions. Red lines denote the pseudo-max constraints that hold with equality. Pseudo-max does not obtain the diagonal constraint coming from comparing configurations $y = (1, 1)$ and $(0, 0)$, since these differ by more than one coordinate. Right: One strictly convex component of the $\ell_{ps}(J)$ function (see Eq. 9). The function is shown for different values of $c_1$, the mean of the $x_1$ variable.
We focus on binary variables, $y_i \in \{0, 1\}$, and consider discriminant functions corresponding to Ising models, a special case of pairwise MRFs ($J$ denotes the vector of "interaction" parameters):
$$f(y; x, J) = \sum_{ij \in E} J_{ij} y_i y_j + \sum_i y_i x_i. \quad (3)$$
The singleton potential for variable $y_i$ is $y_i x_i$ and is not dependent on the model parameters. We could have instead used $J_i y_i x_i$, which would be more standard; however, this would make the parameter vector $J$ invariant to scaling, complicating the identifiability analysis. In the consistency analysis we will assume that the data is generated using a true parameter vector $J^*$. We will show that as the data size goes to infinity, minimization of $\ell_{ps}(J)$ yields $J^*$.

We begin with an illustrative analysis of the pseudo-max constraints for a model with only two variables, i.e., $f(y; x, J) = J y_1 y_2 + y_1 x_1 + y_2 x_2$. The purpose of the analysis is to demonstrate general principles for when pseudo-max constraints may succeed or fail. Assume that training samples are generated via $y(x) = \arg\max_y f(y; x, J^*)$. We can partition the input space $\mathcal{X}$ into four regions, $\{x \in \mathcal{X} : y(x) = \hat{y}\}$ for each of the four configurations $\hat{y}$, shown in Fig. 1 (left). The blue lines outline the exact decision boundaries of $f(y; x, J^*)$, with the lines being given by the constraints in $\Theta$ that hold with equality. The red lines denote the pseudo-max constraints in $\Theta_{ps}$ that hold with equality. For $x$ such that $y(x) = (1, 0)$ or $(0, 1)$, the pseudo-max and exact constraints are identical.

We can identify $J^*$ by obtaining samples $x = (x_1, x_2)$ that explore both sides of one of the decision boundaries that depends on $J^*$. The pseudo-max constraints will fail to identify $J^*$ if the samples do not sufficiently explore the transitions between $y = (0, 1)$ and $y = (1, 1)$, or between $y = (1, 0)$ and $y = (1, 1)$. This can happen, for example, when the input samples are dependent, giving rise only to the configurations $y = (0, 0)$ and $y = (1, 1)$. For points labeled $(1, 1)$ around the decision line $J^* + x_1 + x_2 = 0$, pseudo-max can only tell that they respect $J^* + x_1 \ge 0$ and $J^* + x_2 \ge 0$ (dashed red lines), or $x_1 \le 0$ and $x_2 \le 0$ for points labeled $(0, 0)$. Only constraints that depend on the parameter are effective for learning. For pseudo-max to be able to identify $J^*$, the input samples must be continuous, densely populating the two parameter-dependent decision lines that pseudo-max can use. The two point sets in the figure illustrate good and bad input distributions for pseudo-max. The diagonal set would work well with the exact constraints but badly with pseudo-max, and the difference can be arbitrarily large. However, the input distribution on the right, populating the $J^* + x_2 = 0$ decision line, would permit pseudo-max to identify $J^*$.

⁴ The methods differ substantially in the non-separable setting, where we minimize $\ell_{ps}(\theta)$ using a slack variable for every node and example, rather than just one slack variable per example as in $\ell(\theta)$.
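A small sketch, under the assumed value $J^* = 0.5$ (any $J^* > 0$ would do), of the exact predictor $y(x)$ for this two-variable model; enumerating the four assignments reproduces the partition of $\mathcal{X}$ from Fig. 1 (left).

```python
from itertools import product

def y_of_x(x1, x2, J):
    """Exact prediction y(x) = argmax_y J*y1*y2 + y1*x1 + y2*x2 (Eq. 3, n = 2)."""
    return max(product((0, 1), repeat=2),
               key=lambda y: J * y[0] * y[1] + y[0] * x1 + y[1] * x2)

J_star = 0.5                                   # assumed true parameter
for x1, x2 in [(1.0, 1.0), (-1.0, 1.0), (1.0, -1.0), (-0.2, -0.2)]:
    print((x1, x2), '->', y_of_x(x1, x2, J_star))
# The last point sits near the diagonal line J* + x1 + x2 = 0 and is labeled
# (1, 1); pseudo-max only extracts J* + x1 >= 0 and J* + x2 >= 0 from it.
```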
2.1 Identifiability of True Parameters

In this section, we show that it is possible to approximately identify the true model parameters, up to model equivalence, using the pseudo-max constraints and a carefully chosen linear number of data points. Consider the learning problem for structured prediction defined on a fixed graph $G = (V, E)$, where the parameters to be learned are pairwise potential functions $\theta_{ij}(y_i, y_j)$ for $ij \in E$ and single-node fields $\theta_i(y_i)$ for $i \in V$. We consider discriminant functions of the form
$$f(y; x, \theta) = \sum_{ij \in E} \theta_{ij}(y_i, y_j) + \sum_i \theta_i(y_i) + \sum_i x_i(y_i), \quad (4)$$
where the input space $\mathcal{X} = \mathbb{R}^{|V|k}$ specifies the single-node potentials. Without loss of generality, we remove the additional degrees of freedom in $\theta$ by restricting it to be in a canonical form: $\theta \in \Theta_{can}$ if for all edges $\theta_{ij}(y_i, y_j) = 0$ whenever $y_i = 0$ or $y_j = 0$, and if for all nodes $\theta_i(y_i) = 0$ when $y_i = 0$. As a result, assuming the training set comes from a model in this class, and the input fields $x_i(y_i)$ exercise the discriminant function appropriately, we can hope to identify $\theta^* \in \Theta_{can}$. Indeed, we show that, for some data sets, the pseudo-max constraints are sufficient to identify $\theta^*$.

Let $\Theta_{ps}(\{y^m, x^m\})$ be the set of parameters that satisfy the pseudo-max classification constraints
$$\Theta_{ps}(\{y^m, x^m\}) = \{\theta \mid \forall m, i, y_i \ne y_i^m: \; f(y^m; x^m, \theta) \ge f(y^m_{-i}, y_i; x^m, \theta)\}. \quad (5)$$
For simplicity we omit the margin losses $e(y_i^m, y_i)$, since the input fields $x_i(y_i)$ already suffice to rule out the trivial solution $\theta = 0$.

Proposition 2.1. For any $\theta^* \in \Theta_{can}$, there is a set of $2|V|(k-1) + 2|E|(k-1)^2$ examples, $\{x^m, y(x^m; \theta^*)\}$, such that any pseudo-max consistent $\theta \in \Theta_{ps}(\{y^m, x^m\}) \cap \Theta_{can}$ is arbitrarily close to $\theta^*$.

The proof is given in the supplementary material. To illustrate the key ideas, we consider the simpler binary discriminant function discussed in Eq. 3. Note that the binary model is already in the canonical form, since $J_{ij} y_i y_j = 0$ whenever $y_i = 0$ or $y_j = 0$. For any $ij \in E$, we show how to choose two input examples $x^1$ and $x^2$ such that any $J$ consistent with the pseudo-max constraints for these two examples will have $J_{ij} \in [J^*_{ij} - \epsilon, J^*_{ij} + \epsilon]$. Repeating this for all of the edge parameters then gives the complete set of examples. The input examples we need for this will depend on $J^*$.

For the first example, we set the input fields for all neighbors of $i$ (except $j$) in such a way that we force the corresponding labels to be zero. More formally, we set $x^1_k < -|N(k)| \max_l |J^*_{kl}|$ for $k \in N(i) \setminus j$, resulting in $y^1_k = 0$, where $y^1 = y(x^1)$. In contrast, we set $x^1_j$ to a large value, e.g., $x^1_j > |N(j)| \max_l |J^*_{jl}|$, so that $y^1_j = 1$. Finally, for node $i$, we set $x^1_i = -J^*_{ij} + \epsilon$ so as to obtain a slight preference for $y^1_i = 1$. All other input fields can be set arbitrarily. As a result, the pseudo-max constraints pertaining to node $i$ are $f(y^1; x^1, J) \ge f(y^1_{-i}, y_i; x^1, J)$ for $y_i = 0, 1$. By taking into account the label assignments for $y^1_i$ and its neighbors, and by removing terms that are the same on both sides of the equation, we get $J_{ij} + x^1_i + x^1_j \ge J_{ij} y_i + y_i x^1_i + x^1_j$, which, for $y_i = 0$, implies that $J_{ij} + x^1_i \ge 0$, or $J_{ij} - J^*_{ij} + \epsilon \ge 0$. The second example $x^2$ differs only in terms of the input field for $i$. In particular, we set $x^2_i = -J^*_{ij} - \epsilon$ so that $y^2_i = 0$. This gives $J_{ij} \le J^*_{ij} + \epsilon$, as desired.
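The probing construction above can be sketched in code. The sketch below is a loose rendering of the binary-case argument, with one simplifying assumption flagged in the comments: the magnitude `big` is a single crude bound used to force labels, rather than the per-node thresholds $|N(k)| \max_l |J^*_{kl}|$ used in the text.

```python
import numpy as np

def probe_inputs(J, i, j, neighbors, eps=0.01):
    """Construct the two inputs from Sec. 2.1 (binary case) that bracket the
    true J*_ij: force y_k = 0 for k in N(i)\\{j}, force y_j = 1, and set x_i
    just above / just below -J*_ij. `J` is the (symmetric) matrix of true edge
    parameters; `neighbors` maps each node to its neighbor list."""
    # One crude magnitude that dominates every per-node threshold (assumption).
    big = (max(len(neighbors[k]) for k in neighbors) + 1) * (np.abs(J).max() + 1)
    n = len(neighbors)
    x1 = np.zeros(n)
    for k in neighbors[i]:
        if k != j:
            x1[k] = -big            # drive neighbor labels to 0
    x1[j] = big                     # drive y_j to 1
    x1[i] = -J[i, j] + eps          # slight preference for y_i = 1
    x2 = x1.copy()
    x2[i] = -J[i, j] - eps          # slight preference for y_i = 0
    return x1, x2                   # together they imply |J_ij - J*_ij| <= eps
```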
2.2 Consistency via Strict Convexity

In this section we prove the consistency of the pseudo-max approach by showing that it corresponds to minimizing a strictly convex function. Our proof only requires that $p(x)$ be non-zero for all $x \in \mathbb{R}^n$ (a simple example being a multivariate Gaussian) and that $J^*$ is finite. We use a discriminant function as in Eq. 3. Now, assume the input points $x^m$ are distributed according to $p(x)$ and that the $y^m$ are obtained via $y^m = \arg\max_y f(y; x^m, J^*)$. We can write the $\ell_{ps}(J)$ objective for finite data, and its limit when $M \to \infty$, compactly as:
$$\ell_{ps}(J) = \frac{1}{M} \sum_m \sum_i \max_{y_i} \Big[ (y_i - y_i^m) \Big( x_i^m + \sum_{k \in N(i)} J_{ki} y_k^m \Big) \Big] \;\to\; \sum_i \int p(x) \max_{y_i} \Big[ (y_i - y_i(x)) \Big( x_i + \sum_{k \in N(i)} J_{ki} y_k(x) \Big) \Big] dx, \quad (6)$$
where $y_i(x)$ is the label of $i$ for input $x$ when using parameters $J^*$. Starting from the above, consider the terms separately for each $i$. We partition the integral over $x \in \mathbb{R}^n$ into exclusive regions according to the predicted labels of the neighbors of $i$ (given $x$). Define $S_{ij} = \{x : y_j(x) = 1 \text{ and } y_k(x) = 0 \text{ for } k \in N(i) \setminus j\}$. Eq. 6 can then be written as
$$\ell_{ps}(J) = \sum_i \Big[ \tilde{g}_i(\{J_{ik}\}_{k \in N(i)}) + \sum_{k \in N(i)} g_{ik}(J_{ik}) \Big], \quad (7)$$
where $g_{ik}(J_{ik}) = \int_{x \in S_{ik}} p(x) \max_{y_i} [(y_i - y_i(x))(x_i + J_{ik})] \, dx$ and $\tilde{g}_i(\{J_{ik}\}_{k \in N(i)})$ contains all of the remaining terms, i.e., where either zero or more than one neighbor is set to one. The function $\tilde{g}_i$ is convex in $J$ since it is a sum of integrals over convex functions. We proceed to show that $g_{ik}(J_{ik})$ is strictly convex for all choices of $i$ and $k \in N(i)$. This will show that $\ell_{ps}(J)$ is strictly convex, since it is a sum over functions strictly convex in each one of the variables in $J$.

For all values $x_i \in (-\infty, \infty)$ there is some $x$ in $S_{ij}$. This is because for any finite $x_i$ and finite $J^*$, the other $x_j$'s can be chosen so as to give the $y$ configuration corresponding to $S_{ij}$. Now, since $p(x)$ has full support, we have $P(S_{ij}) > 0$ and $p(x) > 0$ for any $x$ in $S_{ij}$. As a result, this also holds for the marginal $p_i(x_i | S_{ij})$ over $x_i$ within $S_{ij}$. After some algebra, we obtain:
$$g_{ij}(J_{ij}) = P(S_{ij}) \int_{-\infty}^{\infty} p_i(x_i | S_{ij}) \max[0, x_i + J_{ij}] \, dx_i \;-\; \int_{x \in S_{ij}} p(x) \, y_i(x) (x_i + J_{ij}) \, dx.$$
The integral over the $y_i(x)(x_i + J_{ij})$ expression just adds a linear term to $g_{ij}(J_{ij})$. The relevant remaining term is (for brevity we drop $P(S_{ij})$, a strictly positive constant, and the $ij$ index):
$$h(J) = \int_{-\infty}^{\infty} p_i(x_i | S_{ij}) \max[0, x_i + J] \, dx_i = \int_{-\infty}^{\infty} p_i(x_i | S_{ij}) \, \bar{h}(x_i, J) \, dx_i, \quad (8)$$
where we define $\bar{h}(x_i, J) = \max[0, x_i + J]$. Note that $h(J)$ is convex, since $\bar{h}(x_i, J)$ is convex in $J$ for all $x_i$. We want to show that $h(J)$ is strictly convex. Consider $J' < J$ and $\lambda \in (0, 1)$, and define the interval $I = [-J, -\lambda J - (1 - \lambda) J']$. For $x_i \in I$ it holds that $\lambda \bar{h}(x_i, J) + (1 - \lambda) \bar{h}(x_i, J') > \bar{h}(x_i, \lambda J + (1 - \lambda) J')$ (since the first term is strictly positive and the rest are zero). For all other $x_i$ this inequality holds but is not necessarily strict (since $\bar{h}$ is always convex in $J$). We thus have, after integrating over $x$, that $\lambda h(J) + (1 - \lambda) h(J') > h(\lambda J + (1 - \lambda) J')$, implying $h$ is strictly convex, as required. Note that we used the fact that $p(x)$ has full support when integrating over $I$.

The function $\ell_{ps}(J)$ is thus a sum of strictly convex functions in all its variables (namely the $g(J_{ik})$) plus other convex functions of $J$, hence strictly convex. We can now proceed to show consistency. By strict convexity, the pseudo-max objective is minimized at a unique point $J$. Since we know that $\ell_{ps}(J^*) = 0$ and zero is a lower bound on the value of $\ell_{ps}(J)$, it follows that $J^*$ is the unique minimizer. Thus we have that as $M \to \infty$, the minimizer of the pseudo-max objective is the true parameter vector, and thus we have consistency.

As an example, consider the case of two variables $y_1, y_2$, with $x_1$ and $x_2$ distributed according to $\mathcal{N}(c_1, 1)$ and $\mathcal{N}(0, 1)$ respectively. Furthermore, assume $J^*_{12} = 0$. Then simple direct calculation yields:
$$g(J_{12}) = (c_1 + J_{12}) \int_{-J_{12} - c_1}^{-c_1} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx \;-\; \frac{1}{\sqrt{2\pi}} e^{-c_1^2/2} \;+\; \frac{1}{\sqrt{2\pi}} e^{-(J_{12} + c_1)^2/2}, \quad (9)$$
which is indeed a strictly convex function that is minimized at $J_{12} = 0$ (see Fig. 1 for an illustration).
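Assuming the reconstruction of Eq. 9 above, the closed form can be checked numerically: differentiating gives $g'(J) = \Phi(-c_1) - \Phi(-J - c_1)$ and $g''(J) = \phi(J + c_1) > 0$, so the minimum sits at $J = 0$ with $g(0) = 0$, as claimed. A short verification sketch:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def npdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def g(J, c1):
    """Eq. 9: strictly convex component for x1 ~ N(c1, 1), x2 ~ N(0, 1), J12* = 0."""
    return (c1 + J) * (Phi(-c1) - Phi(-J - c1)) - npdf(c1) + npdf(J + c1)

for c1 in (-1.0, 0.0, 1.0):
    vals = [g(j / 10.0, c1) for j in range(-10, 11)]
    assert min(range(21), key=lambda k: vals[k]) == 10   # minimum attained at J = 0
    print(f"c1={c1:+.0f}: g(-1)={vals[0]:.4f}, g(0)={vals[10]:.4f}, g(1)={vals[20]:.4f}")
```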
3 Hardness of Structured Learning

Most structured prediction learning algorithms use some form of inference as a subroutine. However, the corresponding prediction task is generally NP-hard. For example, maximizing the discriminant function defined in Eq. 3 is equivalent to solving Max-Cut, which is known to be NP-hard. This raises the question of whether it is possible to bypass prediction during learning. Although prediction may be intractable for arbitrary MRFs, what does this say about the difficulty of learning with a polynomial number of data points? In this section, we show that the problem of deciding whether there exists a parameter vector that separates the training data is NP-hard. Put in the context of the positive results in this paper, these hardness results show that, although in some cases the pseudo-max constraints yield a consistent estimate, we cannot hope for a certificate of optimality. Put differently, although the pseudo-max constraints in the separable case always give an outer bound on $\Theta$ (and may even be a single point), $\Theta$ could be the empty set and we would never know the difference.

Theorem 3.1. Given labeled examples $\{(x^m, y^m)\}_{m=1}^M$ for a fixed but arbitrary graph $G$, it is NP-hard to decide whether there exist parameters $\theta$ such that $\forall m, \; y^m = \arg\max_y f(y; x^m, \theta)$.

Proof. Any parameters $\theta$ have an equivalent parameterization in canonical form (see Sec. 2.1, also supplementary). Thus, the examples will be separable if and only if they are separable by some $\theta \in \Theta_{can}$. We reduce from unweighted Max-Cut. The Max-Cut problem is to decide, given an undirected graph $G$, whether there exists a cut of at least $K$ edges. Let $\bar{G}$ be the same graph as $G$, with $k = 3$ states per variable. We construct a small set of examples where a parameter vector will exist that separates the data if and only if there is no cut of $K$ or more edges in $G$.

Let $\bar{\theta}$ be parameters in canonical form equivalent to $\bar{\theta}_{ij}(y_i, y_j) = 1$ if $(y_i, y_j) \in \{(1, 2), (2, 1)\}$, $0$ if $y_i = y_j$, and $-n^2$ if $(y_i, y_j) \in \{(1, 3), (2, 3), (3, 1), (3, 2)\}$. We first construct $4n + 8|E|$ examples, using the technique described in Sec. 2.1 (also supplementary material), which, when restricted to the space $\Theta_{can}$, constrain the parameters to equal $\bar{\theta}$. We then use one more example $(x^m, y^m)$ where $y^m = \mathbf{3}$ (every node is in state 3) and, for all $i$, $x^m_i(3) = \frac{K-1}{n}$ and $x^m_i(1) = x^m_i(2) = 0$. The first two states encode the original Max-Cut instance, while the third state is used to construct a labeling $y^m$ that has value equal to $K - 1$, and is otherwise not used.

Let $K^*$ be the value of the maximum cut in $G$. If in any assignment to the last example there is a variable taking the state 3 and another variable taking the state 1 or 2, then the assignment's value will be at most $K^* - n^2$, which is less than zero. By construction, the all-threes assignment has value $K - 1$. Thus, the optimal assignment must either be $\mathbf{3}$ with value $K - 1$, or some combination of states 1 and 2, which has value at most $K^*$. If $K^* > K - 1$ then $\mathbf{3}$ is not optimal and the examples are not separable. If $K^* \le K - 1$, the examples are separable.

This result illustrates the potential difficulty of learning in worst-case graphs. Nonetheless, many problems have a more restricted dependence on the input. For example, in computer vision, edge potentials may depend only on the difference in color between two adjacent pixels. Our results do not preclude positive results on learnability in such restricted settings. By establishing hardness of learning, we also close the open problem of relating hardness of inference and learning in structured prediction.
If inference problems can be solved in polynomial time, then so can learning (using, e.g., structured perceptron). Thus, when learning is hard, inference must be hard as well.

4 Experiments

To evaluate our learning algorithm, we test its performance on both synthetic and real-world datasets. We show that, as the number of training samples grows, the accuracy of the pseudo-max method improves and its speed-up gain over competing algorithms increases. Our learning algorithm corresponds to solving the following, where we add L2 regularization and use a scaled 0-1 loss, $e(y_i, y_i^m) = \mathbf{1}\{y_i \ne y_i^m\}/n_m$ ($n_m$ is the number of labels in example $m$):
$$\min_\theta \; \frac{C}{\sum_m n_m} \sum_{m=1}^M \sum_{i=1}^{n_m} \max_{y_i} \left[ f(y^m_{-i}, y_i; x^m, \theta) - f(y^m; x^m, \theta) + e(y_i, y_i^m) \right] + \|\theta\|^2. \quad (10)$$
We will compare the pseudo-max method with learning using structural SVMs, both with exact inference and LP relaxations [see, e.g., 4]. We use exact inference for prediction at test time.

[Figure 2 appears here: test error vs. train size (log scale) for exact, LP-relaxation, and pseudo-max learning, on (a) the synthetic setting and (b) the Reuters data.]

Figure 2: Test error as a function of train size for various algorithms. Subfigure (a) shows results for a synthetic setting, while (b) shows performance on the Reuters data.

In the synthetic setting we use the discriminant function $f(y; x, \theta) = \sum_{ij \in E} \theta_{ij}(y_i, y_j) + \sum_i x_i \phi_i(y_i)$, which is similar to Eq. 4. We take a fully connected graph over $n = 10$ binary labels. For a weight vector $\theta^*$ (sampled once, uniformly in the range $[-1, 1]$, and used for all train/test sets) we generate train and test instances by sampling $x^m$ uniformly in the range $[-5, 5]$ and then computing the optimal labels $y^m = \arg\max_{y \in \mathcal{Y}} f(y; x^m, \theta^*)$. We generate train sets of increasing size ($M \in \{10, 50, 100, 500, 1000, 5000\}$), run the learning algorithms, and measure the test error for the learned weights (with 1000 test samples). For each train size we average the test error over 10 repeats of sampling and training. Fig. 2(a) shows a comparison of the test error for the three learning algorithms. For small numbers of training examples, the test error of pseudo-max is larger than that of the other algorithms. However, as the train size grows, the error converges to that of exact learning, as our consistency results predict.

We also test the performance of our algorithm on a multi-label document classification task from the Reuters dataset [7]. The data consists of $M = 23149$ training samples, and we use a reduction of the dataset to the 5 most frequent labels. The 5 label variables form a fully connected pairwise graph structure (see [4] for a similar setting). We use random subsamples of increasing size from the train set to learn the parameters, and then measure the test error using 20000 additional samples. For each sample size and learning algorithm, we optimize the trade-off parameter $C$ using 30% of the training data as a hold-out set. Fig. 2(b) shows that for the large-data regime the performance of pseudo-max learning gets close to that of the other methods. However, unlike the synthetic setting, there is still a small gap, even after seeing the entire train set. This could be because the full dataset is not yet large enough to be in the consistent regime (note that exact learning has not flattened either), or because the consistency conditions are not fully satisfied: the data might be non-separable, or the support of the input distribution $p(x)$ may be partial.
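A sketch of the synthetic data generation just described. One simplification relative to the paper: the pairwise potentials $\theta_{ij}(y_i, y_j)$ are collapsed to the Ising form of Eq. 3, and the argmax is computed by brute force over all $2^{10} = 1024$ assignments, which is feasible at this size.

```python
import numpy as np
from itertools import combinations, product

n = 10
edges = list(combinations(range(n), 2))           # fully connected graph
rng = np.random.default_rng(0)
theta = rng.uniform(-1, 1, size=len(edges))       # theta*, sampled once
all_y = np.array(list(product((0, 1), repeat=n))) # all 2^n binary assignments

def f_all(theta, x):
    """Scores f(y; x, theta) = sum_{ij} theta_ij y_i y_j + sum_i x_i y_i
    for every assignment y at once (Ising simplification of Eq. 4)."""
    pair = sum(theta[k] * all_y[:, i] * all_y[:, j]
               for k, (i, j) in enumerate(edges))
    return pair + all_y @ x

def sample_set(M):
    """Draw x ~ U[-5, 5]^n and label it with the exact argmax."""
    X = rng.uniform(-5, 5, size=(M, n))
    Y = np.array([all_y[np.argmax(f_all(theta, x))] for x in X])
    return X, Y

X_train, Y_train = sample_set(100)
```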
We next apply our method to the problem of learning the energy function for protein side-chain placement, mirroring the learning setup of [14], where the authors train a conditional random field (CRF) using tree-reweighted belief propagation to maximize a lower bound on the likelihood.⁵ The prediction problem for side-chain placement corresponds to finding the most likely assignment in a pairwise MRF, and fits naturally into our learning framework. There are only 8 parameters to be learned, corresponding to a reweighting of known energy terms. The dataset consists of 275 proteins, where each MRF has several hundred variables (one per residue of the protein) and each variable has on average 20 states. For prediction we use CPLEX's ILP solver.

Fig. 3 shows a comparison of the pseudo-max method and a cutting-plane algorithm which uses an LP relaxation, solved with CPLEX, for finding violated constraints.⁶ We generate training sets of increasing size ($M \in \{10, 50, 100, 274\}$), and measure the test error for the learned weights on the remaining examples.⁷ For $M = 10, 50, 100$ we average the test error over 3 random train/test splits, whereas for $M = 274$ we do 1-fold cross-validation. We use $C = 1$ for both algorithms.

⁵ The authors' data and results are available from: http://cyanover.fhcrc.org/recomb-2007/
⁶ We significantly optimized the cutting-plane algorithm, e.g., including a large number of initial cutting planes and restricting the weight vector to be positive (which we know to hold at optimality).
⁷ Specifically, for each protein we compute the fraction of correctly predicted $\chi_1$ and $\chi_2$ angles for all residues (except when trivial, e.g., just 1 state). Then, we compute the median of this value across all proteins.

[Figure 3 appears here: two panels showing test error ($\chi_1$ and $\chi_2$) and time to train (minutes) as a function of train size, for pseudo-max, LP-relaxation, and the Soft rep weights.]

Figure 3: Training time (for one train/test split) and test error as a function of train size for both the pseudo-max method and a cutting-plane algorithm which uses an LP relaxation for inference, applied to the problem of learning the energy function for protein side-chain placement. The pseudo-max method obtains better accuracy than both the LP relaxation and HCRF (given roughly five times more data) for a fraction of the training time.

The original weights ("Soft rep" [3]) used for this energy function have 26.7% error across all 275 proteins. The best previously reported parameters, learned in [14] using a Hidden CRF, obtain 25.6% error (their training set included 55 of these 275 proteins, so this is an optimistic estimate). To get a sense of the difficulty of this learning task, we also tried a random positive weight vector, uniformly sampled from the range $[0, 1]$, obtaining an error of 34.9% (results would be much worse if we allowed the weights to be negative). Training using pseudo-max with 50 examples, we learn parameters in under a minute that give better accuracy than the HCRF. The speed-up of training with pseudo-max (using CPLEX's QP solver) versus cutting-plane is striking. For example, for $M = 10$, pseudo-max takes only 3 seconds, a 1000-fold speedup. Unfortunately, the cutting-plane algorithm took a prohibitive amount of time to be able to run on the larger training sets.
Since the data used in learning for protein side-chain placement is both highly non-separable and relatively scarce, these positive results illustrate the potentially wide-spread applicability of the pseudo-max method.

5 Discussion

The key idea of our method is to find parameters that prefer the true assignment $y^m$ over assignments that differ from it in only one variable, in contrast to all other assignments. Perhaps surprisingly, this weak requirement is sufficient to achieve consistency given a rich enough input distribution. One extension of our approach is to add constraints for assignments that differ from $y^m$ in more than one variable. This would tighten the outer bound on $\Theta$ and possibly result in improved performance, but would also increase computational complexity. We could also add such competing assignments via a cutting-plane scheme, so that optimization is performed only over a subset of these constraints.

Our work raises a number of important open problems. It would be interesting to derive generalization bounds to understand the convergence rate of our method, as well as to understand the effect of the distribution $p(x)$ on these rates. The distribution $p(x)$ needs to have two key properties. On the one hand, it needs to explore the space $\mathcal{Y}$, in the sense that a sufficient number of labels need to be obtained as the correct label for the true parameters (this is indeed used in our consistency proofs). On the other hand, $p(x)$ needs to be sufficiently sensitive close to the decision boundaries so that the true parameters can be inferred. We expect that generalization analysis will depend on these two properties of $p(x)$. Note that [11] studied active learning schemes for structured data and may be relevant in the current context.

How should one apply this learning algorithm to non-separable data sets? We suggested one approach, based on using a hinge loss for each of the pseudo constraints. One question in this context is, how resilient is this learning algorithm to label noise? Recent work has analyzed the sensitivity of pseudo-likelihood methods to model mis-specification [8], and it would be interesting to perform a similar analysis here. Also, is it possible to give any guarantees for the empirical and expected risks (with respect to exact inference) obtained by outer-bound learning versus exact learning?

Finally, our algorithm demonstrates a phenomenon where more data can make computation easier. Such a scenario was recently analyzed in the context of supervised learning [12], and it would be interesting to combine the approaches.

Acknowledgments: We thank Chen Yanover for his assistance with the protein data. This work was supported by BSF grant 2008303 and a Google Research Grant. D.S. was supported by a Google PhD Fellowship.

References
[1] J. Besag. The analysis of non-lattice data. The Statistician, 24:179-195, 1975.
[2] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, 2002.
[3] G. Dantas, C. Corrent, S. L. Reichow, J. J. Havranek, Z. M. Eletr, N. G. Isern, B. Kuhlman, G. Varani, E. A. Merritt, and D. Baker. High-resolution structural and thermodynamic analysis of extreme stabilization of human procarboxypeptidase by computational protein design. Journal of Molecular Biology, 366(4):1209-1221, 2007.
[4] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. In Proceedings of the 25th International Conference on Machine Learning, pages 304-311.
ACM, 2008.
[5] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[6] A. Kulesza and F. Pereira. Structured learning with approximate inference. In Advances in Neural Information Processing Systems 20, pages 785-792, 2008.
[7] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: a new benchmark collection for text categorization research. JMLR, 5:361-397, 2004.
[8] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the 25th International Conference on Machine Learning, pages 584-591, New York, NY, USA, 2008. ACM Press.
[9] A. F. T. Martins, N. A. Smith, and E. P. Xing. Polyhedral outer approximations with application to natural language parsing. In ICML 26, pages 713-720, 2009.
[10] N. Ratliff, J. A. D. Bagnell, and M. Zinkevich. (Online) subgradient methods for structured prediction. In AISTATS, 2007.
[11] D. Roth and K. Small. Margin-based active learning for structured output spaces. In Proc. of the European Conference on Machine Learning (ECML). Springer, September 2006.
[12] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928-935. ACM, 2008.
[13] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In Advances in Neural Information Processing Systems 16, pages 25-32, 2004.
[14] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and learning energy functions for side-chain prediction. Journal of Computational Biology, 15(7):899-911, 2008.
On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient
Jie Tang and Pieter Abbeel
Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94709
{jietang, pabbeel}@eecs.berkeley.edu

Abstract
Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return $U(\theta)$ at the current policy parameterization $\theta$, rather than to obtain a more complete estimate of $U(\theta)$, and (ii) using past experience under the current policy only, rather than using all past experience to improve the estimates. We present a new policy search method which leverages both of these observations, as well as generalized baselines, a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.

1 Introduction

Policy gradient methods have been some of the most effective learning algorithms for dynamic control tasks in robotics. They have been applied to a variety of complex real-world reinforcement learning problems, such as hitting a baseball with an articulated arm robot [1], constrained humanoid robotic motion planning [2], and learning gaits for legged robots [3, 4, 5]. For such robotics tasks, real-world trials are typically the most time-consuming factor in the learning process. Making efficient use of limited experience is crucial for good performance.

In this paper we describe a novel connection between likelihood ratio based policy gradient methods and importance sampling. Specifically, we show that the likelihood ratio policy gradient estimate is equivalent to the gradient of an importance sampled estimate of the expected return function, estimated using only data from the current policy. This insight indicates that likelihood ratio policy gradients are quite naive in terms of data use, and suggests an opportunity for novel algorithms which use all past data more efficiently by working with the importance sampled expected return function directly.

Our main contributions are as follows. First, we develop algorithms for global search over the importance sampled expected return function, allowing us to make more progress for a given amount of experience. Our approach uses estimates of the importance sampling variance to constrain the search in a principled way. Second, we derive generalizations of optimal policy gradient baselines which are applicable to the importance sampled expected return function.

Section 2 describes preliminaries on Markov decision processes (MDPs), policy gradient methods, and importance sampling. Section 3 describes the novel connection between importance sampling and likelihood ratio policy gradients, and Section 4 examines our novel minimum variance baselines. Section 5 outlines our proposed method. Section 6 relates our method to prior work. Section 7 demonstrates the effectiveness of the proposed methods on standard reinforcement learning testbeds.

2 Preliminaries

Markov Decision Processes.
A Markov decision process (MDP) is a tuple $(S, A, T, R, D, \gamma, H)$, where $S$ is a set of states; $A$ is a set of actions/inputs; $T = \{P(\cdot|s, u)\}_{s,u}$ is a set of state transition probabilities ($P(\cdot|s, u)$ is the state transition distribution upon taking action $u$ in state $s$); $R : S \times A \to \mathbb{R}$ is the reward function; $D$ is a distribution over states from which the initial state $s_0$ is drawn; $0 < \gamma < 1$ is the discount factor; and $H$ is the horizon time of the MDP, so that the MDP terminates after $H$ steps.¹ A policy $\pi$ is a mapping from states $S$ to a probability distribution over the set of actions $A$. We will consider policies parameterized by a vector $\theta \in \mathbb{R}^n$. We denote the expected return of a policy $\pi_\theta$ by
$$U(\theta) = E_{P(\tau; \theta)} \Big[ \sum_{t=0}^{H} \gamma^t R(s_t, u_t) \,\Big|\, \pi_\theta \Big] = \sum_\tau P(\tau; \theta) R(\tau). \quad (2.1)$$
Here $P(\tau; \theta)$ is the probability distribution induced by the policy $\pi_\theta$ over all possible state-action trajectories $\tau = (s_0, u_0, s_1, u_1, \ldots, s_H, u_H)$. We overload notation and let $R(\tau) = \sum_{t=0}^{H} \gamma^t R(s_t, u_t)$ be the (discounted) sum of rewards accumulated along the state-action trajectory $\tau$.

Likelihood Ratio Policy Gradient. Likelihood ratio policy gradient methods perform a (stochastic) gradient ascent over the policy parameter space $\theta$ to find a local optimum of $U(\theta)$. One well-known technique, called REINFORCE [6, 7], expresses the gradient $\nabla_\theta U(\theta)$ as follows:
$$g = \nabla_\theta U(\theta) = E_{P(\tau; \theta)} [\nabla_\theta \log P(\tau; \theta) R(\tau)] \;\approx\; \hat{g} = \frac{1}{m} \sum_{i=1}^m \nabla_\theta \log P(\tau^{(i)}; \theta) R(\tau^{(i)}),$$
where the rightmost expression provides an unbiased estimate of the policy gradient from $m$ sample paths $\{\tau^{(1)}, \ldots, \tau^{(m)}\}$ obtained from acting under policy $\pi_\theta$. Using the Markov assumption, we can decompose $P(\tau; \theta)$ into a product of conditional probabilities, and we obtain $\nabla_\theta \log P(\tau^{(i)}; \theta) = \sum_{t=0}^{H} \nabla_\theta \log \pi_\theta(u_t^{(i)} | s_t^{(i)})$. Hence no access to a dynamics model is required to compute an unbiased estimate of the policy gradient. REINFORCE has been shown to be moderately efficient in terms of the number of samples used [6, 7]. To reduce the variance it is common to use baselines. Since $E_P[\nabla_\theta \log P(\tau; \theta)] = \sum_\tau \nabla_\theta P(\tau; \theta) = \nabla_\theta 1 = 0$, we can add $b^\top \nabla_\theta \log P(\tau; \theta)$ (where $b$ is a vector which can be optimized to minimize variance) to the REINFORCE gradient estimate without biasing it [8, 9]. Past work often used a scalar $b$, resulting in:
$$\nabla_\theta U(\theta) = E_{P(\tau; \theta)} [\nabla_\theta \log P(\tau; \theta) (R(\tau) - b)] \;\approx\; \hat{g} = \frac{1}{m} \sum_{i=1}^m \nabla_\theta \log P(\tau^{(i)}; \theta) (R(\tau^{(i)}) - b).$$

Importance Sampling. For a general function $f$ and a probability measure $P$, computing a quantity of interest of the form
$$E_{P(X)}[f(X)] = \int_x P(x) f(x) \, dx$$
can be computationally challenging. The expectation is often approximated with a sample-based estimate. However, samples from $P$ could be difficult to obtain, or $P$ might have very low probability where $f$ takes its largest values. Importance sampling provides an alternative solution which uses samples from a different distribution $Q$. Given samples from $Q$, we can estimate the expectation w.r.t. $P$ as:
$$E_{P(X)}[f(X)] = E_{Q(X)} \Big[ \frac{P(X)}{Q(X)} f(X) \Big] \;\approx\; \frac{1}{m} \sum_{i=1}^m \frac{P(x^{(i)})}{Q(x^{(i)})} f(x^{(i)}), \qquad x^{(i)} \sim Q.$$
In the above, we assume $Q(x) = 0 \Rightarrow P(x) = 0$. Hence, one can sample from a different distribution $Q$ and then simply re-weight the samples to obtain an unbiased estimate. This can be readily leveraged to estimate the expected return of a stochastic policy [10] as follows:
$$\hat{U}(\theta) = \frac{1}{m} \sum_{i=1}^m \frac{P(\tau^{(i)}; \theta)}{Q(\tau^{(i)})} R(\tau^{(i)}), \qquad \tau^{(i)} \sim Q, \quad (2.2)$$
where we assume $Q(\tau) = 0 \Rightarrow P(\tau; \theta) = 0$.

¹ Any infinite-horizon MDP with discounted rewards can be $\epsilon$-approximated by a finite-horizon MDP, using a horizon $H = \lceil \log_\gamma (\epsilon (1 - \gamma)/R_{max}) \rceil$, where $R_{max} = \max_s |R(s)|$.
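A minimal sketch of the estimator in Eq. (2.2). The trajectory representation and the two log-probability callbacks are assumptions for illustration; only policy terms enter the ratio, so the unknown dynamics cancel exactly as in the text.

```python
import numpy as np

def is_return_estimate(trajs, logp_target, logp_behavior):
    """Eq. (2.2): U_hat(theta) = (1/m) sum_i [P(tau_i; theta)/Q(tau_i)] R(tau_i).
    `trajs` is a list of (states, actions, total_return) tuples; the callbacks
    score a single (s, a) pair under the target and behavior policies, so the
    dynamics terms cancel in the likelihood ratio."""
    vals = []
    for states, actions, R in trajs:
        logw = sum(logp_target(s, a) - logp_behavior(s, a)
                   for s, a in zip(states, actions))
        vals.append(np.exp(logw) * R)   # importance weight times return
    return float(np.mean(vals))
```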
If we choose $Q(\tau) = P(\tau; \theta')$, then we are estimating the return of a policy $\pi_\theta$ from sample paths obtained from acting according to a policy $\pi_{\theta'}$. Evaluating the importance weights does not require a dynamics model:
$$\frac{P(\tau^{(i)}; \theta)}{P(\tau^{(i)}; \theta')} = \frac{\prod_{t=0}^H \pi_\theta(u_t | s_t)}{\prod_{t=0}^H \pi_{\theta'}(u_t | s_t)}.$$
If we have samples from many different distributions $P(\tau; \theta^{(j)})$, a standard technique is to create a fused empirical distribution $Q(\tau) = \frac{1}{m} \sum_{j=1}^m P(\tau; \theta^{(j)})$ to enable use of all past data [10].

3 Likelihood Ratio Policy Gradient via Importance Sampling

We now outline a novel connection between policy gradients and importance sampling. A set of trajectories $\{\tau^{(1)}, \ldots, \tau^{(m)}\}$ sampled from policy $\pi_{\theta^*}$ induces a distribution over paths $Q(\tau) = P(\tau; \theta^*)$. Let $\hat{U}(\theta^*)$ denote the importance sampled estimate of $U(\theta)$ at $\theta^*$. Using Equation (2.2), we have:
$$\frac{\partial \hat{U}}{\partial \theta_j}(\theta^*) = \frac{1}{m} \sum_{i=1}^m \frac{1}{Q(\tau^{(i)})} \frac{\partial P(\tau^{(i)}; \theta^*)}{\partial \theta_j} R(\tau^{(i)}) = \frac{1}{m} \sum_{i=1}^m \frac{P(\tau^{(i)}; \theta^*)}{Q(\tau^{(i)})} \frac{\partial \log P(\tau^{(i)}; \theta^*)}{\partial \theta_j} R(\tau^{(i)}) = \frac{1}{m} \sum_{i=1}^m \frac{\partial \log P(\tau^{(i)}; \theta^*)}{\partial \theta_j} R(\tau^{(i)}), \quad (3.1)$$
using $Q(\tau) = P(\tau; \theta^*)$ in the last step. Equation 3.1 is the $j$-th entry of the likelihood ratio based estimate of the gradient of $U(\theta)$ at $\theta^*$. This analysis shows that the standard likelihood ratio policy gradient can be interpreted as forming an importance sampling based estimate of the expected return, based on the runs under the current policy $\pi_{\theta^*}$, and then using this estimate of the expected return function only to estimate a gradient at $\theta^*$. In doing so, it fails to make efficient use of the trials from past policies: (i) it only uses the information provided by the gradient of the function $\hat{U}(\theta)$ at the point $\theta^*$, rather than the whole function $\hat{U}(\theta)$, and (ii) it only uses the runs under the most recent policy $\pi_{\theta^*}$, rather than using a more informed importance sampling based estimate that uses all past data.

Instead of only using local information from a single policy to drive our learning, we can use the global information provided by $\hat{U}(\theta)$ estimated using trials run under all past policies. Such importance sampling based methods (as have been proposed in [10]) should be able to learn from fewer trial runs than the currently widely popular likelihood ratio based methods.

Generalization to G(PO)MDP / Policy Gradient Theorem formulation. The observation that past rewards do not depend on future states or actions is leveraged by the G(PO)MDP [8] and the Policy Gradient Theorem [11] variations on REINFORCE to reduce the variance of their gradient estimates. This same observation can also be leveraged when estimating the expected return function itself. Let $\tau_{1:t}$ denote the state-action sequence experienced from time 1 through time $t$; then we have
$$U(\theta) = \sum_\tau P(\tau; \theta) R(\tau) = \sum_\tau \sum_{t=0}^H P(\tau_{1:t}; \theta) R(s_t, u_t). \quad (3.2)$$
For simplicity of notation we will continue to describe our approach in terms of the expression for $U(\theta)$ given in Equation (2.1), but our generalization of baselines and our policy search algorithm are equally applicable when using the expression for $U(\theta)$ we present in Equation (3.2).

4 Generalized Unbiased Baselines

Previous work has shown that the REINFORCE gradient estimate benefits greatly from the addition of an optimal baseline term [12, 9, 8]. In this section, we show that policy gradient baselines are special cases of a more general variance reduction technique. Our result generalizes policy gradient baselines in three ways: (i) it applies to estimating expectations of any random quantity, not just policy gradients; (ii) it allows for baseline matrices and higher-dimensional tensors, not just vectors; and (iii) it can be applied recursively to yield baseline terms for baselines, since baselines are themselves expectations.

Minimum Variance Unbiased Baselines. Given a random variable $X \sim P_\theta(X)$, where $P_\theta$ is a parametric probability distribution with parameter $\theta$, we have that $E_{P_\theta}[\nabla_\theta \log P_\theta(X)] = 0$. Hence for any constant vector $b$ and any scalar function $h(X)$, the quantity $\frac{1}{m} \sum_{i=1}^m \big(h(x^{(i)}) - b^\top \nabla_\theta \log P_\theta(x^{(i)})\big)$ with $x^{(i)}$ drawn from $P_\theta$ is an unbiased estimator of the scalar quantity $E_{P_\theta}[h(X)]$. The variance of this estimator is minimized when the variance of the random variable $g(X) = h(X) - b^\top \nabla_\theta \log P_\theta(X)$ is minimized. This variance is given by:
$$\mathrm{Var}_{P_\theta}\big[h(X) - b^\top \nabla_\theta \log P_\theta(X)\big] = E_{P_\theta}\Big[\big(h(X) - b^\top \nabla_\theta \log P_\theta(X)\big)^2\Big] - \Big(E_{P_\theta}\big[h(X) - b^\top \nabla_\theta \log P_\theta(X)\big]\Big)^2.$$
As $b^\top E_{P_\theta}[\nabla_\theta \log P_\theta(X)] = 0$, the second term is independent of $b$. Setting the gradient of the first term with respect to $b$ equal to zero yields the minimum variance baseline
$$b = E_{P_\theta}\big[\nabla_\theta \log P_\theta(X) \nabla_\theta \log P_\theta(X)^\top\big]^{-1} E_{P_\theta}\big[\nabla_\theta \log P_\theta(X) h(X)\big]. \quad (4.1)$$
The baselines commonly employed with REINFORCE, GPOMDP, and other likelihood ratio policy gradient methods can be derived as special cases of this generalized baseline [12].
Our result generalizes policy gradient baselines in three ways: (i) It applies to estimating expectations of any random quantity, not just policy gradients; (ii) It allows for baseline matrices and higher-dimensional tensors, not just vectors; and (iii) It can be applied recursively to yield baseline terms for baselines since baselines are themselves expectations. Minimum Variance Unbiased Baselines. Given a random variable X ? P? (X), where P? is a parametric probability distribution with parameter ?, we have that EP? [?? log P? (X)] = 0. Pm 1 (i) Hence for any constant vector b and any scalar function h(X), we have that m i=1 (h(x ) ? b> ?? log P? (x(i) )) with x(i) drawn from P? is an unbiased estimator of the scalar quantity EP? [h(X)]. The variance of this estimator is minimized when the variance of the random variable g(X) = h(X) ? bT ?? logP? (X) is minimized. This variance is given by: 2 VarP? [h(X)?b> ?? log P? (X)] = EP? [ h(X) ? b> ?? log P? (X) ]?(EP? [h(X)?b> ?? log P? (X)])2 . As b> EP? [?? log P? (X)] = 0, the second term is independent of b. Setting the gradient of the first term with respect to b equal to zero yields the minimum variance baseline b = EP? [?? log P? (X)?? log P? (X)> ]?1 EP? [?? log P? (X)h(X)]. (4.1) The baselines commonly employed with REINFORCE, GPOMDP, and other likelihood ratio policy gradient methods can be derived as special cases of this generalized baseline [12]. 3 Minimum Variance Unbiased Baselines with Importance Sampling. When using importance sampling with x(i) drawn from Q, we have an unbiased estimator of the form Pm P? (x(i) ) 1 (i) > (i) i=1 Q(x(i) ) (h(x ) ? b ?? log P? (x )) with a minimum variance baseline vector m b = EQ h P? (X) ?? Q(X) ? (X) log P? (X) PQ(X) ?? log P? (X)> i?1 EQ h P? (X) ?? Q(X) i ? (X) log P? (X) PQ(X) h(X) . (4.2) Baselines. The minimum variance technique is naturally extended to vector-valued or matrixvalued random variables h(X). For each entry in h(X) we can compute a minimum variance baseline vector b using Equation (4.1) or (4.2). In general, if h(X) is an n-dimensional tensor, we can stack these baseline vectors into a n + 1-dimensional tensor. Indeed, in the case of REINFORCE we would obtain a baseline matrix, rather than a baseline scalar (as in the original work [7]) and rather than a vector baseline (as described in later work, such as [12]). The baselines themselves are estimated from sample data. Using standard policy gradient methods, it can be impractical to run enough trials to accurately fit such baselines. By using importance sampling to reuse data we can use richer baseline terms in our estimators. Recursive Baselines. The baselines are themselves composed of expectations. It is possible to recursively insert minimum variance unbiased baseline terms into these expectations in order to reduce the variance on the baseline estimates. However, the number of baseline parameters being estimated increases rapidly in this recursive process. Moreover, if we estimate multiple expectations from the same set of samples, these estimates become correlated and the final result is no longer unbiased. In practice, these baselines can be regularized to match the amount of available data. In Section 8 we empirically investigate the performance of several different baseline schemes. 5 b Policy Search Using U We propose the algorithm outlined in Figure 1. It uses importance sampling with optimal generalized b (?) of the expected return function based on the data gathered so far. 
5 Policy Search Using $\hat{U}$

We propose the algorithm outlined in Figure 1. It uses importance sampling with optimal generalized baselines to obtain estimates $\hat{U}(\theta)$ of the expected return function based on the data gathered so far. This estimator allows us to search for a $\theta$ which improves the expected return. The algorithm maintains a list of candidate policy parameters from which it searches for improvements. Memory-based search allows backtracking away from unpromising parts of the search space without taking additional, costly trials on the real platform.

Input: domain of policy parameters $\Theta$, initial policy $\pi_{\theta_0}$
for i = 0 to ... do
  1. Run M trials under policy $\pi_{\theta_i}$
  2. Search within ESS region:
  for j = 1 : i do
    $\theta_j \leftarrow \bar{\theta}_j$ (the stored candidate)
    while $\hat{U}(\theta_j)$ is improving do
      $g_j \leftarrow$ step_direction($\hat{U}(\theta_j)$)
      $\alpha_j \leftarrow$ ESS_aware_line_search($\hat{U}(\theta_j)$, $g_j$)
      $\theta_j \leftarrow \theta_j + \alpha_j g_j$
    end while
  end for
  3. Update policy: $\theta_{i+1} = \arg\max_{\theta_j} \hat{U}(\theta_j)$
end for

Figure 1: Our policy search algorithm.

Estimate of Expected Returns: We use weighted importance sampling, and add a baseline to Equation (2.2):
$$\hat{U}(\theta) = \frac{1}{Z} \sum_{i=1}^m \frac{P(\tau^{(i)}; \theta)}{Q(\tau^{(i)})} \big( R(\tau^{(i)}) - b^\top \nabla_\theta \log P(\tau^{(i)}; \theta) \big), \qquad Z = \sum_{i=1}^m \frac{P(\tau^{(i)}; \theta)}{Q(\tau^{(i)})}, \quad (5.1)$$
where $i$ indexes over all past trials, and $Q$ is the empirical distribution over past trials (see Section 2).
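A direct transcription of the weighted estimator of Eq. (5.1) as a sketch; `scores` holds the score vectors $\nabla_\theta \log P(\tau^{(i)}; \theta)$ and `weights` the ratios $P(\tau^{(i)}; \theta)/Q(\tau^{(i)})$, both assumed precomputed.

```python
import numpy as np

def u_hat(weights, returns, scores, b):
    """Weighted importance sampled estimate of Eq. (5.1):
    U_hat(theta) = (1/Z) sum_i w_i (R_i - b^T g_i),  Z = sum_i w_i."""
    w = np.asarray(weights, dtype=float)
    adjusted = np.asarray(returns) - np.asarray(scores) @ np.asarray(b)
    return float(np.sum(w * adjusted) / np.sum(w))
```

The self-normalizing constant $Z$ trades a small bias for substantially lower variance than the unnormalized estimator of Eq. (2.2), which is why it is the form used inside the search.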
This technique does not use estimates of the importance sampling variance to restrict the search, does not use generalized minimum variance baselines, and does not use memory. Our experiments show that these improvements are necessary to outperform standard policy gradient methods across our test domains. Our general approach of estimating and optimizing the expected return function instead of the gradient of the expected return function allows for non-local policy steps. Recent EM-based policy search methods [18, 14] are able to make larger steps by optimizing a local lower bound on the expected return function. These methods can use importance sampling to make better use of data. This lower bound objective function and update step could be used in our memory based approach instead of following the finite difference gradient step. We explained throughout the paper the relationship with earlier methods such as REINFORCE [7, 6] and GPOMDP [8, 9]. PEGASUS [19] is an efficient alternative policy search method but can only be used if a simulation model is available. Recent work has suggested following the natural gradient direction [20, 21, 22]. The natural gradient approach is a parameterization invariant second order method which finds the direction which 2 Estimating the baseline from the same data as the other terms in Equation (5.1) results in a biased estimator. This is often done in policy gradient methods and we do so in our experiments. It is however possible to retain an unbiased estimate by data splitting, which could include averaging over resamplings. 3 b within the ESS region, differences in In practice, since we cannot always find the true optimum of U step direction do affect policies that are sampled. Other step directions or policy improvement rules may be substituted for the finite difference gradient step. For example, we could follow the natural gradient direction, or use an EM-based policy update [14]. 4 Though the Armijo rule has its own free parameters to choose, performance is much less sensitive to these hyper-parameters. We use the same Armijo rule parameters for all of our experiments. 5 We can extend standard likelihood ratio policy gradient methods to use the importance sampled expected return estimate. In our experience this approach yields results comparable to the best fixed hand-tuned step size for each problem?hence alleviating the need of these methods for tuning the step size. 5 (a) (b) (c) Figure 2: (a) Performance of various choices for the higher level baselines in our approach. We have a matrix baseline (MAT), and a recursive baseline (REC). For reference, we also plot our approach without an optimal baseline (GLO), GPOMDP (GP), and IS GPOMDP (ISGP). (b), (c) Performance evaluation on LQR and Cartpole. The algorithms considered are the GPOMDP likelihood ratio policy gradient method (GP), GPOMDP with importance sampling (ISGP), Peshkin and Shelton?s algorithm (PS), and our approach (OUR). maximize the ratio of the improvement of the objective function over the change in distribution over trajectories. Our approach exploits a similar intuition through consideration of variance through the effective sampling size (ESS)?preferring regions for which the past experience gives a good estimate. Natural actor critic (NAC) approaches have enjoyed substantial success on real-life robotics tasks [1, 23]. In the episodic setting, which we consider in this paper, the only difference between episodic NAC and natural gradient is in the estimate of the baseline. 
Episodic NAC computes a scalar baseline by solving an LSTD-Q type regression rather than, e.g., using a minimum variance baseline criterion.⁶

Footnotes:
² Estimating the baseline from the same data as the other terms in Equation (5.1) results in a biased estimator. This is often done in policy gradient methods and we do so in our experiments. It is however possible to retain an unbiased estimate by data splitting, which could include averaging over resamplings.
³ In practice, since we cannot always find the true optimum of Û within the ESS region, differences in step direction do affect which policies are sampled. Other step directions or policy improvement rules may be substituted for the finite-difference gradient step. For example, we could follow the natural gradient direction, or use an EM-based policy update [14].
⁴ Though the Armijo rule has its own free parameters to choose, performance is much less sensitive to these hyper-parameters. We use the same Armijo rule parameters for all of our experiments.
⁵ We can extend standard likelihood ratio policy gradient methods to use the importance sampled expected return estimate. In our experience this approach yields results comparable to the best fixed hand-tuned step size for each problem, hence alleviating the need of these methods for tuning the step size.
⁶ The difference in performance due to different estimation procedures for the scalar baseline has been observed to be so small that only one plot is shown rather than both in [1].

Figure 2: (a) Performance of various choices for the higher-level baselines in our approach. We have a matrix baseline (MAT), and a recursive baseline (REC). For reference, we also plot our approach without an optimal baseline (GLO), GPOMDP (GP), and IS GPOMDP (ISGP). (b), (c) Performance evaluation on LQR and Cartpole. The algorithms considered are the GPOMDP likelihood ratio policy gradient method (GP), GPOMDP with importance sampling (ISGP), Peshkin and Shelton's algorithm (PS), and our approach (OUR).

7 Experimental Setup

We present experiments on four testbeds: LQR, cartpole, mountain car, and acrobot. The details of each experimental testbed can be found in the appendix. Though the systems are simulated, the learning algorithms cannot make use of the simulation dynamics except by gathering trials. For each testbed we randomly generated a pool of initial policies until one was found that does not achieve the worst-case return. We then used our policy gradient algorithms to optimize performance. The same set of initial policies is used across learning algorithms. We focus on an analysis of performance when only a small number of trials is allowed: in each of the following experiments we run 50 iterations of policy search, running M trials for each policy at each iteration.

8 Experimental Results

In our experimental results, we first evaluate several generalized baselines in the context of our policy search algorithm. We then break down the effectiveness of each component of our algorithm: memory-based search, optimal baselines, and the ESS search region. Our policy search outperforms likelihood ratio methods on two of the testbeds and performs equally well on the two remaining ones. Performance is reported as the expected return versus the number of sampled trials. The expected return is plotted on the y-axis. Error bars are shown based on running each instance with 10 initial policies. The number of trials is plotted on the x-axis.

Generalized Baseline Experiments: There are a variety of choices in our generalized baseline technique: we can vary the dimensionality of the baseline terms to add, the depth of the recursive baseline, and what (if any) regularization to use. We implemented our policy search using three different baseline techniques: a vector baseline, a matrix baseline, and a recursive tensor baseline on top of the matrix baseline. Figure 2 (a) shows the average reward received plotted against the number of trials run for the matrix (MAT) and recursive tensor (REC) baselines. The vector baseline was not able to improve the initial policies. The matrix baseline outperforms the other baselines and we use it going forward.

Components of Our Approach: Figure 3 examines each of the central contributions of our algorithm (memory-based search, baselines, and ESS).

Figure 3: This figure demonstrates the effect of (a) memory-based search, (b) optimal baselines, and (c) the ESS search region on cartpole performance. In each figure, we show the performance of Peshkin and Shelton's approach (PS) and our approach (OUR). In addition, we show the performance with memory only (PS+M), baselines only (PS+B), and ESS only (PS+E), and our approach with memory (OUR-M), baselines (OUR-B), and ESS (OUR-E) removed. GPOMDP (GP) and IS GPOMDP (ISGP) are also plotted for reference purposes.

We tested our approach without any of the three components, which is equivalent to Peshkin and Shelton's algorithm [10], which we label PS. We added each one of the three components individually, labeled PS+M, PS+B, PS+E. We also tested the performance with two out of three components, labeled OUR-M, OUR-B, and OUR-E respectively.
Finally, we tested the performance of our approach with all three components. The results indicate that each of the three components improves performance, with ESS and memory-based search being the most important. Without any one of the components, our approach has difficulty outperforming importance sampled GPOMDP.

Comparison With Likelihood Ratio Policy Gradients: We have compared several episodic likelihood ratio algorithms against our global policy search algorithm. We run M = 10 trials per iteration, and repeat each trial 10 times. For the likelihood ratio algorithms, we use the appropriate optimal baselines [12] and hand-tune the step size. As a comparison, we have also implemented policy gradient algorithms which use importance sampling to estimate the gradient of Û. Figure 2 plots the reward received as a function of the number of real trials sampled from the system. We plot our global search approach against GPOMDP, an importance sampled GPOMDP (IS GPOMDP), and an implementation of Peshkin and Shelton's global search.⁷ Our approach is consistently able to improve its initial policy, outperforming likelihood ratio policy gradient methods on both the cartpole and LQR testbeds. In general, importance sampling based methods outperform non-importance sampling based algorithms, which work poorly when given few trials. All algorithms in consideration performed poorly on the mountain car and acrobot testbeds, none of them showing significant improvement in performance through learning.

9 Conclusion

We have shown that policy gradient methods are a special case of gradient descent over the importance sampled expected return function Û. Since our approach provides a full approximation of the expected return function, we can use global information in addition to gradient information to achieve faster learning. We have also shown that optimal baselines for standard policy gradient methods can be seen as special cases of a more general variance reduction technique. Our importance sampling approach allows us to leverage more data to fit generalized baseline terms in our estimators. Our experiments show that our algorithm requires fewer trials than current policy gradient methods on several testbeds and no more trials on the remaining testbeds, making it appealing for robotic learning tasks for which trials are expensive.

⁷ We do not plot REINFORCE as our experiments indicate that GPOMDP outperforms REINFORCE on these testbeds, a fact consistent with existing literature [1].

Acknowledgments

The authors thank Jan Peters and Hamid Reza Maei for insightful discussions and the anonymous reviewers for their feedback. This work was supported in part by NSF under award IIS-0931463. Jie Tang is supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

Appendix

(i) LQR: We use the formulation given in [21]. We use a linear parameterized policy with parameters K ∈ R²,⁸ given by u(t) ~ N(L x(t), σ), with L = −1.999 + 1.998/(1 + e^{K1}) and σ = 0.001 + 1/(1 + e^{K2}). The initial state is drawn from x(0) ~ N(0.3, 0.1), and the dynamics are given by x(t + 1) = 0.7 x(t) + u(t) + N(0, 0.01). The system incurs a penalty of −(x(t)² + u(t)²) at each time step. Each episode was 20 time steps.
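For concreteness, a single episode of this LQR testbed could be simulated as follows (our own sketch under the stated formulation, not the authors' code; we treat the dispersion parameters as standard deviations, which is an assumption):

```python
import numpy as np

def lqr_episode(K1, K2, horizon=20, rng=None):
    """Simulate one episode of the LQR testbed and return its total reward."""
    rng = np.random.default_rng() if rng is None else rng
    L = -1.999 + 1.998 / (1.0 + np.exp(K1))   # squashed feedback gain
    sigma = 0.001 + 1.0 / (1.0 + np.exp(K2))  # exploration noise (treated as a std. dev.)
    x = rng.normal(0.3, 0.1)                  # initial state
    total = 0.0
    for _ in range(horizon):
        u = rng.normal(L * x, sigma)          # stochastic linear policy u(t) ~ N(L x(t), sigma)
        total += -(x**2 + u**2)               # per-step penalty
        x = 0.7 * x + u + rng.normal(0.0, 0.01)
    return total
```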
(ii) Cartpole: This task consists of a cart moving along a track while balancing a pole. The goal of this task is to move the cartpole back to the origin as quickly as possible while keeping the pole upright (a simulation sketch appears at the end of this appendix). Following the formulation given in [24], our control input is drawn from the policy u ~ N(K^⊤ x, σ), with state x = [x, ẋ, θ, θ̇] and policy parameters K = [K1, K2, K3, K4, σ]. The dynamics are given by

  ẍ = [F − m_p l (θ̈ cos θ − θ̇² sin θ)] / (m_c + m_p),   θ̈ = [g sin θ (m_c + m_p) − (u_t + m_p l θ̇² sin θ) cos θ] / [(4/3) l (m_c + m_p) − m_p l cos² θ].

Here m_p = 0.1, m_c = 1.0, l = 0.5, g = 9.81. The control interval was 0.02 s. We solve the dynamics using a fourth-order Runge-Kutta method. We run each episode for 200 time steps, though the episode terminates once the cartpole has failed (defined as whenever |x| > 2.4 m or |θ| > 0.7 rad). The reward function is −2 for every time step after the failure occurs, 0 if the cartpole is balanced and satisfies |x| < 0.05, and −1 otherwise.

(iii) Mountain Car: The mountain car testbed [25] models a simulated car, which starts in a valley and must climb the hill to the right as quickly as possible. The task involves two states [x, ẋ] and three policy parameters [K1, K2, σ]. Our control inputs for this problem are restricted to {−1, 1}. Our parameterized policy is given by π(u_t = 1 | x_t, ẋ_t) = P(K1 sign(ẋ_t) ẋ_t² + K2 + ε_t < x_t), where ε_t ~ N(0, σ). Our initial acceleration is f_0 = +1; f_{t+1} = u_t f_t. The dynamics are given by ẋ_{t+1} = ẋ_t + 0.001 f_t − 0.0025 cos(3(x_t − 0.5)), and x_{t+1} = x_t + ẋ_t. We run for 200 time steps, though the episode terminates once the mountain car reaches its target at x = 1.0. The reward function is 0 if the car is at its target and −1 otherwise.

(iv) Acrobot: The acrobot [25] is a robot with two rotational links connected by an actuated motor. It has four states [θ1, θ̇1, θ2, θ̇2] and parameters K = [K1, ..., K8, σ]. The acrobot is initialized close to [π, 0, 0, 0] (pointing straight up), and the goal is to keep the acrobot balanced upright for as long as possible. Our control input is drawn from the policy u ~ N(L x + K^⊤ φ(x), σ). Here L is the optimal LQR controller for the acrobot linearized around the stationary point, and φ(x) = [(π − θ1)θ2, θ̇1 θ̇2, (π − θ1)θ̇1, θ2 θ̇2, (π − θ1)|π − θ1|, θ̇1|θ̇1|, θ2|θ2|, θ̇2|θ̇2|]. The dynamics are given by

  θ̈1 = −(d2 θ̈2 + φ1) / d1,   θ̈2 = [u + (d2/d1) φ1 − m2 l1 lc2 θ̇1² sin θ2 − φ2] / [m2 lc2² + I2 − d2²/d1],

with d1 = m1 lc1² + m2 (l1² + lc2² + 2 l1 lc2 cos θ2) + I1 + I2, d2 = m2 (lc2² + l1 lc2 cos θ2) + I2, φ1 = −m2 l1 lc2 θ̇2² sin θ2 − 2 m2 l1 lc2 θ̇2 θ̇1 sin θ2 + (m1 lc1 + m2 l1) g cos(θ1 − π/2) + φ2, and φ2 = m2 lc2 g cos(θ1 + θ2 − π/2). Here m1 = 1, m2 = 1, l1 = 1, l2 = 2, lc1 = 0.5, lc2 = 1, I1 = 0.0833, I2 = 0.33, g = 9.81. The control interval was 0.02 s. We solve the dynamics using a fourth-order Runge-Kutta method. Each episode is run for 400 time steps, though the episode terminates once the acrobot has failed (defined as whenever the height of the second link, h_t = −cos(θ1) − cos(θ1 + θ2), falls below 0.5). The reward function is −2 for every time step after the failure occurs, and −(1 − (−cos(θ1) − cos(θ1 + θ2))/2)² otherwise.

⁸ We followed standard formulations of the control policy for LQR and cartpole. All policies are designed as functions of a linear combination of the policy parameters and hand-selected features.
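As an illustration of the cartpole testbed in item (ii), here is a minimal sketch of its dynamics with the fourth-order Runge-Kutta step described above (our own reconstruction from the stated equations, not the authors' code):

```python
import numpy as np

MP, MC, L, G = 0.1, 1.0, 0.5, 9.81  # pole mass, cart mass, pole half-length, gravity

def cartpole_deriv(state, u):
    """Time derivative of [x, x_dot, theta, theta_dot] under control force u."""
    x, x_dot, th, th_dot = state
    sin_t, cos_t = np.sin(th), np.cos(th)
    th_acc = (G * sin_t * (MC + MP) - (u + MP * L * th_dot**2 * sin_t) * cos_t) \
             / ((4.0 / 3.0) * L * (MC + MP) - MP * L * cos_t**2)
    x_acc = (u + MP * L * (th_dot**2 * sin_t - th_acc * cos_t)) / (MC + MP)
    return np.array([x_dot, x_acc, th_dot, th_acc])

def rk4_step(state, u, dt=0.02):
    """One fourth-order Runge-Kutta step with a zero-order hold on the control."""
    k1 = cartpole_deriv(state, u)
    k2 = cartpole_deriv(state + 0.5 * dt * k1, u)
    k3 = cartpole_deriv(state + 0.5 * dt * k2, u)
    k4 = cartpole_deriv(state + dt * k3, u)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```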
References

[1] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the European Machine Learning Conference (ECML), 2005.
[2] T. Mori, Y. Nakamura, M. Sato, and S. Ishii. Reinforcement learning for a CPG-driven biped robot. In AAAI, 2004.
[3] R. Tedrake, T. W. Zhang, and H. S. Seung. Learning to walk in 20 minutes. In Proceedings of the Fourteenth Yale Workshop on Adaptive and Learning Systems, 2005.
[4] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion, 2004.
[5] J. Zico Kolter and Andrew Y. Ng. Learning omnidirectional path following using dimensionality reduction. RSS, 2007.
[6] P. Glynn. Likelihood ratio gradient estimation: An overview. In Proceedings of the 1987 Winter Simulation Conference, Atlanta, GA, 1987.
[7] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:23, 1992.
[8] J. Baxter and P. Bartlett. Direct gradient-based reinforcement learning. Journal of Artificial Intelligence Research, 1999.
[9] E. Greensmith, P. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 2004.
[10] Leonid Peshkin and Christian R. Shelton. Learning from scarce experience. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002.
[11] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning. In NIPS 13, 2000.
[12] J. Peters and S. Schaal. Policy gradient methods for robotics. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2006.
[13] A. Kong, J. S. Liu, and W. H. Wong. Sequential imputations and Bayesian missing data problems. Journal of the American Statistical Association, 89:278–288, 1994.
[14] Jens Kober and Jan Peters. Policy search for motor primitives in robotics. NIPS, 2008.
[15] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 2004.
[16] Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting, 1991.
[17] Xi-Ren Cao. A basic formula for on-line policy-gradient algorithms. IEEE Transactions on Automatic Control, 50:696–699, 2005.
[18] Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the International Conference on Machine Learning (ICML), pages 745–750, 2007.
[19] Andrew Ng and Michael Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 406–415, 2000.
[20] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10, 1998.
[21] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, 2001.
[22] Nicolas Le Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. NIPS, 2007.
[23] Jan Peters. Machine Learning of Motor Skills for Robotics. PhD thesis, University of Southern California, 2007.
[24] M. Riedmiller, J. Peters, and S. Schaal. Evaluation of policy gradient methods and variants on the cart-pole benchmark. In IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, 2007.
[25] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Self-Paced Learning for Latent Variable Models

M. Pawan Kumar    Benjamin Packer    Daphne Koller
Computer Science Department, Stanford University
{pawan,bpacker,koller}@cs.stanford.edu

Abstract

Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.

1 Introduction

Latent variable models provide an elegant formulation for several applications of machine learning. For example, in computer vision, we may have many "car" images from which we wish to learn a "car" model. However, the exact location of the cars may be unknown and can be modeled as latent variables. In medical diagnosis, learning to diagnose a disease based on symptoms can be improved by treating unknown or unobserved diseases as latent variables (to deal with confounding factors). Learning the parameters of a latent variable model often requires solving a non-convex optimization problem. Some common approaches for obtaining an approximate solution include the well-known EM [8] and CCCP algorithms [9, 23, 24]. However, these approaches are prone to getting stuck in a bad local minimum with high training and generalization error.

Machine learning literature is filled with scenarios in which one is required to solve a non-convex optimization task, for example learning perceptrons or deep belief nets. A common approach for avoiding a bad local minimum in these cases is to use multiple runs with random initializations and pick the best solution amongst them (as determined, for example, by testing on a validation set). However, this approach is ad hoc and computationally expensive, as one may be required to use several runs to obtain an accurate solution. Bengio et al. [3] recently proposed an alternative method for training with non-convex objectives, called curriculum learning. The idea is inspired by the way children are taught: start with easier concepts (for example, recognizing objects in simple scenes where an object is clearly visible) and build up to more complex ones (for example, cluttered images with occlusions). Curriculum learning suggests using the easy samples first and gradually introducing the learning algorithm to more complex ones. The main challenge in using the curriculum learning strategy is that it requires the identification of easy and hard samples in a given training dataset. However, in many real-world applications, such a ranking of training samples may be onerous or conceptually difficult for a human to provide;
even if this additional human supervision can be provided, what is intuitively "easy" for a human may not match what is easy for the algorithm in the feature and hypothesis space employed for the given application.

To alleviate these deficiencies, we introduce self-paced learning. In the context of human education, self-paced learning refers to a system where the curriculum is determined by the pupil's abilities rather than being fixed by a teacher. We build on this intuition for learning latent variable models by designing an iterative approach that simultaneously selects easy samples and updates the parameters at each iteration. The number of samples selected at each iteration is determined by a weight that is gradually annealed such that later iterations introduce more samples. The algorithm converges when all samples have been considered and the objective function cannot be improved further. Note that, in self-paced learning, the characterization of what is "easy" applies not to individual samples, but to sets of samples; a set of samples is easy if it admits a good fit in the model space. We empirically demonstrate that our self-paced learning approach outperforms the state of the art algorithm for learning a recently proposed latent variable model, called the latent structural SVM, on four standard machine learning applications using publicly available datasets.

2 Related Work

Self-paced learning is related to curriculum learning in that both regimes suggest processing the samples in a meaningful order. Bengio et al. [3] noted that curriculum learning can be seen as a type of continuation method [1]. However, in their work, they circumvented the challenge of obtaining such an ordering by using datasets where there is a clear distinction between easy and hard samples (for example, classifying equilateral triangles vs. squares is easier than classifying general triangles vs. general quadrilaterals). Such datasets are rarely available in real-world applications, so it is not surprising that the experiments in [3] were mostly restricted to small toy examples.

Our approach also has a similar flavor to active learning, which chooses a sample to learn from at each iteration. Active learning approaches differ in their sample selection criteria. For example, Tong and Koller [21] suggest choosing a sample that is close to the margin (a "hard" sample), corresponding to anti-curriculum learning. Cohn et al. [6] advocate the use of the most uncertain sample with respect to the current classifier. However, unlike our setting, in active learning the labels of all the samples are not known when the samples are chosen. Another related learning regime is co-training, which works by alternately training classifiers such that the most confidently labeled samples from one classifier are used to train the other [5, 17]. Our approach differs from co-training in that in our setting the latent variables are simply used to assist in predicting the target labels, which are always observed, whereas co-training deals with a semi-supervised setting in which some labels are missing.

3 Preliminaries

We will denote the training data as D = {(x_1, y_1), ..., (x_n, y_n)}, where x_i ∈ X are the observed variables (which we refer to as input) for the ith sample and y_i ∈ Y are the unobserved variables (which we refer to as output), whose values are known during training. In addition, latent variable models also contain latent, or hidden, variables that we denote by h_i ∈ H. For example, when learning a "car"
model using image-level labels, x represents an image, the binary output y indicates the presence or absence of a car in the image, and h represents the car's bounding box (if present).

Given the training data, the parameters w of a latent variable model are learned by optimizing an objective function, for example by maximizing the likelihood of D or minimizing the risk over D. Typically, the learning algorithm proceeds iteratively, with each iteration consisting of two stages: (i) the hidden variables are either imputed or marginalized to obtain an estimate of the objective function that only depends on w; and (ii) the estimate of the objective function is optimized to obtain a new set of parameters. We briefly describe two such well-known algorithms below.

EM Algorithm for Likelihood Maximization. An intuitive objective is to maximize the likelihood:

  max_w Σ_i log Pr(x_i, y_i; w) = max_w Σ_i log ( Σ_{h_i ∈ H} Pr(x_i, y_i, h_i; w) ).   (1)

A common approach for this task is to use the EM method [8] or one of its many variants [12]. Outlined in Algorithm 1, EM iterates between finding the expected value of the latent variables h and maximizing objective (1) subject to this expectation. We refer the reader to [8] for more details.

Algorithm 1 The EM algorithm for parameter estimation by likelihood maximization.
input D = {(x_1, y_1), ..., (x_n, y_n)}, w_0, ε.
1: t ← 0
2: repeat
3:   Obtain the expectation of objective (1) under the distribution Pr(h_i | x_i, y_i; w_t).
4:   Update w_{t+1} by maximizing the expectation of objective (1). Specifically, w_{t+1} = argmax_w Σ_i Σ_{h_i} Pr(h_i | x_i, y_i; w_t) log Pr(x_i, y_i, h_i; w).
5:   t ← t + 1.
6: until the objective function cannot be increased above tolerance ε.

CCCP Algorithm for Risk Minimization. Given the true output y, we denote the user-specified risk of predicting ŷ(w) as Δ(y, ŷ(w)). The risk is usually highly non-convex in w, and therefore very difficult to minimize. An efficient way to overcome this difficulty is to use the recently proposed latent structural support vector machine (hereby referred to as latent SSVM) formulation [9, 23] that minimizes a regularized upper bound on the risk. Latent SSVM provides a linear prediction rule of the form f_w(x) = argmax_{y ∈ Y, h ∈ H} w^⊤ Φ(x, y, h). Here, Φ(x, y, h) is the joint feature vector. For instance, in our "car" model learning example, the joint feature vector can be modeled as the HOG [7] descriptor extracted using pixels in the bounding box h. The parameters w are learned by solving the following optimization problem:

  min_{w, ξ_i ≥ 0} (1/2)||w||² + (C/n) Σ_{i=1}^n ξ_i,
  s.t. max_{h_i ∈ H} w^⊤ Φ(x_i, y_i, h_i) − w^⊤ Φ(x_i, ŷ_i, ĥ_i) ≥ Δ(y_i, ŷ_i, ĥ_i) − ξ_i,  ∀ŷ_i ∈ Y, ∀ĥ_i ∈ H, i = 1, ..., n.   (2)

For any given w, the value of ξ_i can be shown to be an upper bound on the risk Δ(y_i, ŷ_i(w)) (where ŷ_i(w) is the predicted output given w). The risk function can also depend on ĥ_i(w); that is, it can be of the form Δ(y_i, ŷ_i(w), ĥ_i(w)). We refer the reader to [23] for more details.

Problem (2) can be viewed as minimizing the sum of a convex and a concave function. This observation leads to a concave-convex procedure (CCCP) [24], outlined in Algorithm 2, which has been shown to converge to a local minimum or saddle point solution [19]. The algorithm has two main steps: (i) imputing the hidden variables (step 3), which corresponds to approximating the concave function by a linear upper bound; and (ii) updating the value of the parameter using the values of the hidden variables.
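As a concrete illustration of the latent SSVM prediction rule above, the following sketch enumerates a small, discrete output and latent space (our own code, not the authors'; `joint_feature` stands for whatever implements Φ(x, y, h)):

```python
import numpy as np

def latent_ssvm_predict(x, w, outputs, latents, joint_feature):
    """Prediction rule f_w(x) = argmax over (y, h) of w . Phi(x, y, h).

    outputs, latents: iterables over Y and H (assumed small enough to enumerate)
    joint_feature:    callable returning Phi(x, y, h) as a NumPy vector
    """
    best, best_score = None, -np.inf
    for y in outputs:
        for h in latents:
            score = w @ joint_feature(x, y, h)
            if score > best_score:
                best, best_score = (y, h), score
    return best  # the predicted output together with the imputed latent variable
```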
Note that updating the parameters requires us to solve a convex SSVM learning problem (where the output y_i is now concatenated with the hidden variable h_i*) for which several efficient algorithms exist in the literature [14, 20, 22].

Algorithm 2 The CCCP algorithm for parameter estimation of latent SSVM.
input D = {(x_1, y_1), ..., (x_n, y_n)}, w_0, ε.
1: t ← 0
2: repeat
3:   Update h_i* = argmax_{h_i ∈ H} w_t^⊤ Φ(x_i, y_i, h_i).
4:   Update w_{t+1} by fixing the hidden variables for output y_i to h_i* and solving the corresponding SSVM problem. Specifically, w_{t+1} = argmin_w (1/2)||w||² + (C/n) Σ_i max_{ŷ_i, ĥ_i} {0, Δ(y_i, ŷ_i, ĥ_i) + w^⊤(Φ(x_i, ŷ_i, ĥ_i) − Φ(x_i, y_i, h_i*))}.
5:   t ← t + 1.
6: until the objective function cannot be decreased below tolerance ε.

4 Self-Paced Learning for Latent Variable Models

Our self-paced learning strategy alleviates the main difficulty of curriculum learning, namely the lack of a readily computable measure of the easiness of a sample. In the context of a latent variable model, for a given parameter w, this easiness can be defined in two ways: (i) a sample is easy if we are confident about the value of its hidden variable; or (ii) a sample is easy if it is easy to predict its true output. The two definitions are somewhat related: if we are more certain about the hidden variable, we may be more certain about the prediction. They are different in that certainty does not imply correctness, and the hidden variables may not be directly relevant to what makes the output of a sample easy to predict. We therefore focus on the second definition: easy samples are ones whose correct output can be predicted easily (its likelihood is high, or it lies far from the margin).
To optimize problem (4), we note that it can be relaxed such that each variable vi is allowed to take any value in the interval [0, 1]. This relaxation is tight; that is, for any value of w an optimum value of vi is either 0 or 1 for all samples. If f (xi , yi ; w) < 1/K then vi = 1 yields the optimal objective function value. Similarly, if f (xi , yi ; w) > 1/K then the objective is optimal when vi = 0. Relaxing problem (4) allows us to identify special cases where the optimum parameter update can be found efficiently. One such special case is when r(.) and f (.) are convex in w, as in the latent SSVM parameter update. In this case, the relaxation of problem (4) is a biconvex optimization problem. Recall that a biconvex problem is one where the variables z can be divided into two sets z1 and z2 such that for a fixed value of each set, the optimal value of the other set can be obtained by solving a convex optimization problem. In our case, the two sets of variables are w and v. Biconvex problems have a vast literature, with both global [11] and local [2] optimization techniques. In this work, we use alternative convex search (ACS) [2], which alternatively optimizes w and v while keeping the other set of variables fixed. We found in our experiments that ACS obtained accurate results. Even in the general case with non-convex r(.) and/or f (.), we can use the alternative search strategy to efficiently obtain an approximate solution for problem (4). Given parameters w, we can obtain the optimum v as vi = ?(f (xi , yi ; w) < 1/K), where ?(.) is the indicator function. For a fixed v, problem (4) has the same form as problem (3). Thus, the optimization for self-paced learning is as easy (or as difficult) as the original parameter learning algorithm. Self-Paced Learning for Latent SSVM. As an illustrative example of self-paced learning, Algorithm 3 outlines the overall self-paced learning method for latent SSVM, which involves solving a modified version of problem (2). At each iteration, the weight K is reduced by a factor of ? > 1, introducing more and more (difficult) samples from one iteration to the next. The algorithm converges when it considers all samples but is unable to decrease the latent SSVM objective function value below the tolerance ?. We note that self-paced learning provides the same guarantees as CCCP: Property: Algorithm 3 converges to a local minimum or saddle point solution of problem (2). This follows from the fact that the last iteration of Algorithm 3 is the original CCCP algorithm. Our algorithm requires an initial parameter w0 (similar to CCCP). In our experiments, we obtained an estimate of w0 by initially setting vi = 1 for all samples and running the original CCCP algorithm for a fixed, small number of iterations T0 . As our results indicate, this simple strategy was sufficient to obtain an accurate set of parameters using self-paced learning. 5 Experiments We now demonstrate the efficacy of self-paced learning in the context of latent SSVM. We show that our approach outperforms the state of the art CCCP algorithm on four standard machine learning 4 Algorithm 3 The self-paced learning algorithm for parameter estimation of latent SSVM. input D = {(x1 , y1 ), ? ? ? , (xn , yn )}, w0 , K0 , ?. 1: t ? 0, K ? K0 . 2: repeat 3: Update h?i = argmaxhi ?H wt? ?(xi , yi , hi ). Pn 4: Update wt+1 by using ACS to minimize the objective 12 ||w||2 + C i=1 vi ?i ? n subject to the constraints of problem (2) as well as v ? {0, 1}n . 5: t ? t + 1, K ? K/?. 
5 Experiments

We now demonstrate the efficacy of self-paced learning in the context of latent SSVM. We show that our approach outperforms the state of the art CCCP algorithm on four standard machine learning applications. In all our experiments, the initial weight K_0 is set such that the first iteration selects more than half the samples (as there are typically more easy samples than difficult ones). The weight is reduced by a factor μ = 1.3 at each iteration and the parameters are initialized using T_0 = 2 iterations of the original CCCP algorithm.

Figure 1: Results for the noun phrase coreference experiment. Top: MITRE score. Bottom: Pairwise score. (a) The relative objective value computed as (obj_cccp − obj_spl)/obj_cccp, where obj_cccp and obj_spl are the objective values of CCCP and self-paced learning respectively. A green circle indicates a significant improvement (greater than tolerance Cε), while a red circle indicates a significant decline. The black dashed line demarcates equal objective values. (b) Loss over the training data. Minimum MITRE loss: 14.48 and 14.02 for CCCP and self-paced learning respectively; minimum pairwise loss: 31.10 and 31.03. (c) Loss over the test data. Minimum MITRE loss: 15.38 and 14.91; minimum pairwise loss: 34.10 and 33.93.

5.1 Noun Phrase Coreference

Problem Formulation. Given the occurrence of all the nouns in a document, the goal of noun phrase coreference is to provide a clustering of the nouns such that each cluster refers to a single object. This task was formulated within the SSVM framework in [10] and extended to include latent variables in [23]. Formally, the input vector x consists of the pairwise features x_ij suggested in [16] between all pairs of noun phrases i and j in the document. The output y represents a clustering of the nouns. A hidden variable h specifies a forest over the nouns such that each tree in the forest consists of all the nouns of one cluster. Imputing the hidden variables involves finding the maximum spanning forest (which can be solved by Kruskal's or Prim's algorithm; a sketch is given after this section's results). Similar to [23], we employ two different loss functions, corresponding to the pairwise and MITRE scores.

Dataset. We use the publicly available MUC6 noun phrase coreference dataset, which consists of 60 documents. We use the same split of 30 training and 30 test documents as [23].

Results. We tested CCCP and our self-paced learning method on different values of C; the average training times over all 40 experiments (20 different values of C and two different loss functions) for the two methods were 1183 and 1080 seconds respectively. Fig. 1 compares the two methods in terms of the value of the objective function (which is the main focus of this work), the loss over the training data and the loss over the test data. Note that self-paced learning significantly improves the objective function value in 11 of the 40 experiments (compared to only once when CCCP outperforms self-paced learning; see Fig. 1(a)). It also provides a better training and testing loss for both MITRE and pairwise scores when using the optimal value of C (see Fig. 1(b)-(c)).
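As promised above, the maximum spanning forest used to impute the coreference hidden variable can be computed with a Kruskal-style greedy procedure. A minimal sketch follows (our own code; `scored_edges` is assumed to hold w^⊤ x_ij for the noun pairs allowed in the same tree, i.e. pairs consistent with the clustering y):

```python
def max_spanning_forest(num_nouns, scored_edges):
    """Greedy Kruskal: keep the highest-scoring edges that do not form a cycle.

    scored_edges: list of (score, i, j) tuples for permitted noun pairs.
    Returns the chosen edges; the resulting trees span the clusters.
    """
    parent = list(range(num_nouns))

    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    forest = []
    for score, i, j in sorted(scored_edges, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:                  # joining two components cannot create a cycle
            parent[ri] = rj
            forest.append((i, j))
    return forest
```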
The positive sequences are assumed to contain particular patterns, called motifs, of length m that are believed to be useful for classification. However, the starting position of the motif within a gene sequence is often not known. Hence, this position is treated as the hidden variable h. For this problem, we use the joint feature vector suggested by [23]. Here, imputing the hidden variables simply involves a search for the starting position of the motif. The loss function ? is the standard 0-1 classification loss. Dataset. We use the publicly available UniProbe dataset [4] that provides positive and negative DNA sequences for 177 proteins. For this work, we chose five proteins at random. The total number of sequences per protein is roughly 40, 000. For all the sequences, the motif length m is known (provided with the UniProbe dataset) and the background Markov model is assumed to be of order k = 3. In order to specify a classification task for a particular protein, we randomly split the sequences into roughly 50% for training and 50% for testing. CCCP SPL CCCP SPL CCCP SPL (a) Objective function value 106.50 ? 0.38 94.00 ? 0.53 106.60 ? 0.30 93.51 ? 0.29 (b) Training error (%) 27.10 ? 0.44 32.03 ? 0.31 26.90 ? 0.28 26.94 ? 0.26 32.04 ? 0.23 26.81 ? 0.19 (c) Test error (%) 27.10 ? 0.36 32.15 ? 0.31 27.10 ? 0.37 27.08 ? 0.38 32.24 ? 0.25 27.03 ? 0.13 92.77 ? 0.99 92.37 ? 0.65 116.63 ? 18.78 107.18 ? 1.48 75.51 ? 1.97 74.23 ? 0.59 34.89 ? 8.53 30.31 ? 1.14 20.09 ? 0.81 19.52 ? 0.34 35.42 ? 8.19 30.84 ? 1.38 20.25 ? 0.65 19.65 ? 0.39 Table 1: Mean and standard deviations for the motif finding experiments using the original CCCP algorithm (top row) and the proposed self-paced learning approach (bottom row). The better mean value is highlighted in bold. Note that self-paced learning provides an improved objective value (the primary concern of this work) for all proteins. The improvement in objective value also translates to an improvement in training and test errors. Results. We used five different folds for each protein, randomly initializing the motif positions for all training samples using four different seed values (fixed for both methods). We report results for each method using the best seed (chosen according to the value of the objective function). For all experiments we use C = 150 and ? = 0.001 (the large size of the dataset made cross-validation highly time consuming). The average time over all 100 runs for CCCP and self-paced learning are 824 and 1287 seconds respectively. Although our approach is slower than CCCP for this application, as table 1 shows, it learns a better set of parameters. While improvements for most folds are small, for the fourth protein, CCCP gets stuck in a bad local minimum despite using multiple random initializations (this is indicated by the large mean and standard deviation values). This behavior is to be expected: in many cases, the objective function landscape is such that CCCP avoids local optima; but in some cases, CCCP gets stuck in poor local optimum. Indeed, over all the 100 runs (5 proteins, 5 folds and 4 seed values) CCCP got stuck in a bad local minimum 18 times (where a bad local minimum is one that gave 50% test error) compared to 1 run where self-paced learning got stuck. Fig. 2 shows the average Hamming distance between the motifs of the selected samples at each iteration of the self-paced learning algorithm. 
Note that initially the algorithm selects samples whose motifs have a low Hamming distance (which intuitively correspond to the easy samples for this application). It then gradually introduces more difficult samples (as indicated by the rise in the average Hamming distance). Finally, it considers all samples and attempts to find the most discriminative motif across the entire dataset. Note that the motifs found over the entire dataset using self-paced learning provide a smaller average Hamming distance than those found using the original CCCP algorithm, indicating a greater coherence for the resulting output. 5.3 Handwritten Digit Recognition Problem Formulation. Handwritten digit recognition is a special case of multi-label classification, and hence can be formulated within the SSVM framework. Specifically, given an input vector x, which consists of m grayscale values that represent an image of a handwritten digit, our aim is to predict the digit. In other words, Y = {0, 1, ? ? ? , 9}. It is well-known that the accuracy of digit recognition can be greatly improved by explicitly modeling the deformations present in each image, for example see [18]. For simplicity, we assume that the deformations are restricted to an arbitrary rotation of the image, where the angle of rotation is not known beforehand. This angle (which takes a value from a finite discrete set) is modeled as the hidden variable h. We specify the joint feature vector as ?(x, y, h) = (0y(m+1) ; ?h (x) 1; 0(9?y)(m+1) ), where ?h (x) is the vector representation 6 Figure 2: Average Hamming distance between the motifs found in all selected samples at each iteration. Our approach starts with easy samples (small Hamming distance) and gradually introduces more difficult samples (large Hamming distance) until it starts to consider all samples of the training set. The figure shows results for three different protein-fold pairs. The average Hamming distance (over all proteins and folds) of the motifs obtained at convergence are 0.6155 for CCCP and 0.6099 for self-paced learning. Figure 3: Four digit pairs from MNIST: 1-7, 2-7, 3-8, 8-9. Relative objective is computed as in Fig. 1. Positive values indicate superior results for self-paced learning. The dotted black lines delineate where the difference is greater than the convergence criteria range (C?); differences outside this range are highlighted in blue. of the image x rotated by the angle corresponding to h. In other words, the joint feature vector is the rotated image of the digit which is padded in the front and back with the appropriate number of zeroes. Imputing the hidden variables simply involves a search over a discrete set of angles. Similar to the motif finding experiment, we use the standard 0-1 classification loss. Dataset. We use the standard MNIST dataset [15], which represents each handwritten digit as a vector of length 784 (that is, an image of size 28 ? 28). For efficiency, we use PCA to reduce the dimensionality of each sample to 10. We perform binary classification on four difficult digit pairs (1-7, 2-7, 3-8, and 8-9), as in [25]. The training standard dataset size for each digit ranges from 5, 851 to 6, 742, and the test sets range from 974 to 1, 135 digits. The rotation modeled by the hidden variable can take one of 11 discrete values, evenly spaced between ?60 and 60 degrees. Results. For each digit pair, we use C values ranging from 25 to 300, set ? = 0.001, and set 4 K = 10 C . 
Modeling rotation as a hidden variable significantly improves classification performance, allowing the images to be better aligned with each other. Across all experiments for both learning methods, using hidden variables achieves better test error; the improvement over using no hidden variables is 12%, 8%, 11%, and 22%, respectively, for the four digit pairs. CCCP learning took an average of 18 minutes across all runs, while self-paced learning took an average of 53 minutes. The figure above compares the training and test errors and objective values between CCCP and self-paced learning. Self-paced learning achieves significantly better values in 15 runs, and is worse in 4 runs, demonstrating that it helps find better solutions to the optimization problems. Though training and test errors do not necessarily correlate with objective values, the best test error across C values is better for self-paced learning for one of the digit pairs (1-7), and is the same for the others.

5.4 Object Localization

Problem Formulation. Given a set of images along with labels that indicate the presence of a particular object category in the image (for example, a mammal), our goal is to learn discriminative object models for all object categories (that is, models that can distinguish one object, say bison, from another, say elephant). In practice, although it is easy to mine such images from free photo-sharing websites such as Flickr, it is burdensome to obtain ground truth annotations of the exact location of the object in each image. To avoid requiring these human annotations, we model the location of objects as hidden variables. Formally, for a given image x, category y and location h, the score is modeled as w^⊤ Φ(x, y, h) = w_y^⊤ φ_h(x), where w_y are the parameters that correspond to the class y and φ_h(·) is the HOG [7, 9] feature extracted from the image at position h (the size of the object is assumed to be the same for all images, a reasonable assumption for our datasets).
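A sketch of this scoring function and the latent-location search it induces (our own illustration; `hog_at` stands for a HOG extractor over a fixed-size window at a given position):

```python
import numpy as np

def best_location(image, w_y, candidate_positions, hog_at):
    """Impute the bounding-box position: argmax over h of w_y . phi_h(x)."""
    scores = [w_y @ hog_at(image, pos) for pos in candidate_positions]
    best = int(np.argmax(scores))
    return candidate_positions[best], scores[best]

def classify_image(image, weights_per_class, candidate_positions, hog_at):
    """Predict the category by maximizing w_y . phi_h(x) jointly over y and h."""
    best_y, best_h, best_s = None, None, -np.inf
    for y, w_y in weights_per_class.items():
        h, s = best_location(image, w_y, candidate_positions, hog_at)
        if s > best_s:
            best_y, best_h, best_s = y, h, s
    return best_y, best_h
```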
Self-paced learning provided a significantly lower (more than tolerance) objective value than CCCP for all folds. The better objective value resulted in a substantial improvement in the training (for 4 folds) and testing loss (an improvement of approximately 4% for achieved for 2 folds). In these experiments, CCCP never outperformed self-paced learning for any of the three measures of performance. Objective 4.70 ? 0.11 Train Loss (%) 0.33 ? 0.18 Test Loss (%) 16.92 ? 5.16 Objective 4.53 ? 0.15 Train Loss (%) 0.0 ? 0.0 Test Loss (%) 15.38 ? 3.85 Table 2: Results for the object localization experiment. Left: CCCP. Right: Self-paced learning. Note that self-paced learning provides better results for all measures of performance. Fig. 4 shows the imputed bounding boxes for two images during various iterations of the two algorithms. The proposed self-paced learning algorithm does not use the hard image during the initial iterations (as indicated by the red bounding box). In contrast, CCCP considers all images at each iteration. Note that self-paced learning provides a more accurate bounding box for the hard image at convergence, thereby illustrating the importance of learning in a meaningful order. In our experience, this was a typical behavior of the two algorithms. 6 Discussion We proposed the self-paced learning regime in the context of parameter estimation for latent variable models. Our method works by iteratively solving a biconvex optimization problem that simultaneously selects easy samples and updates the parameters. Using four standard datasets from disparate domains (natural language processing, computational biology and computer vision) we showed that our method outperforms the state of the art approach. In the current work, we solve the biconvex optimization problem using an alternate convex search strategy, which only provides us with a local minimum solution. Although our results indicate that such a strategy is more accurate than the state of the art, it is worth noting that the biconvex problem can also be solved using a global optimization procedure, for example the one described in [11]. This is a valuable direction for future work. We are also currently investigating the benefits of selfpaced learning on other computer vision applications, where the ability to handle large and rapidly growing weakly supervised data is fundamental to the success of the field. Acknowledgements. This work is supported by NSF under grant IIS 0917151, MURI contract N000140710747, and the Boeing company. 8 References [1] E. Allgower and K. Georg. Numerical continuation methods: An introduction. SpringerVerlag, 1990. [2] M. Bazaraa, H. Sherali, and C. Shetty. Nonlinear Programming - Theory and Algorithms. John Wiley and Sons, Inc., 1993. [3] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009. [4] M. Berger, G. Badis, A. Gehrke, and S. Talukder et al. Variation in homeodomain DNA binding revealed by high-resolution analysis of sequence preferences. Cell, 27, 2008. [5] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 98. [6] D. Cohn, Z. Ghahramani, and M. Jordan. Active learning with statistical models. JAIR, 4:129? 145, 1996. [7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. [8] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of Royal Statistical Society, 39(1):1?38, 1977. [9] P. Felzenszwalb, D. McAllester, and D. 
Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[10] T. Finley and T. Joachims. Supervised clustering with support vector machines. In ICML, 2005.
[11] C. Floudas and V. Visweswaran. Primal-relaxed dual global optimization approach. Journal of Optimization Theory and Applications, 78(2):187–225, 1993.
[12] A. Gelman, J. Carlin, H. Stern, and D. Rubin. Bayesian Data Analysis. Chapman and Hall, 1995.
[13] G. Heitz, G. Elidan, B. Packer, and D. Koller. Shape-based object localization for descriptive classification. IJCV, 2009.
[14] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training for structural SVMs. Machine Learning, 77(1):27–59, 2009.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] V. Ng and C. Cardie. Improving machine learning approaches to coreference resolution. In ACL, 2002.
[17] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In CIKM, 2000.
[18] P. Simard, B. Victorri, Y. LeCun, and J. Denker. Tangent Prop - a formalism for specifying selected invariances in an adaptive network. In NIPS, 1991.
[19] B. Sriperumbudur and G. Lanckriet. On the convergence of the concave-convex procedure. In NIPS Workshop on Optimization for Machine Learning, 2009.
[20] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[21] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. JMLR, 2:45–66, 2001.
[22] I. Tsochantaridis, T. Hofmann, Y. Altun, and T. Joachims. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[23] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[24] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15, 2003.
[25] K. Zhang, I. Tsang, and J. Kwok. Maximum margin clustering made practical. In ICML, 2007.
Fast Large-scale Mixture Modeling with Component-specific Data Partitions

Bo Thiesson* (Microsoft Research) and Chong Wang*† (Princeton University)
*Equal contributors. †Work done during an internship at Microsoft Research.

Abstract

Remarkably easy implementation and guaranteed convergence have made the EM algorithm one of the most used algorithms for mixture modeling. On the downside, the E-step is linear in both the sample size and the number of mixture components, making it impractical for large-scale data. Based on the variational EM framework, we propose a fast alternative that uses component-specific data partitions to obtain a sub-linear E-step in sample size, while the algorithm still maintains provable convergence. Our approach builds on previous work, but is significantly faster and scales much better in the number of mixture components. We demonstrate this speedup by experiments on large-scale synthetic and real data.

1 Introduction

Probabilistic mixture modeling [7] has been widely used for density estimation and clustering applications. The Expectation-Maximization (EM) algorithm [4, 11] is one of the most used methods for this task, for clear reasons: elegant formulation of an iterative procedure, ease of implementation, and guaranteed monotone convergence for the objective. On the other hand, the EM algorithm also has some acknowledged shortcomings. In particular, the E-step is linear in both the number of data points and the number of mixture components, and therefore computationally impractical for large-scale applications. Our work was motivated by a large-scale geo-spatial problem, demanding a mixture model of a customer base (a huge number of data points) for competing businesses (a large number of mixture components), as the basis for site evaluation (where to locate a new store). Several approximation schemes for EM have been proposed to address the scalability problem, e.g. [2, 12, 14, 10, 17, 16], to mention a few. Besides [17, 16], none of these variants has both an E-step that is truly sub-linear in sample size and also enjoys provable convergence for a well-defined objective function. More details are discussed in Section 5. Our work is inspired by the "chunky EM" algorithm in [17, 16], a smart application of the variational EM framework [11], where a lower bound on the objective function increases at each iteration and convergence is guaranteed. An E-step in standard EM calculates expected sufficient statistics under mixture-component membership probabilities calculated for each individual data point given the most recent model estimate. The variational EM framework alters the E-step to use sufficient statistics calculated under a variational distribution instead. In chunky EM, the speedup is obtained by using a variational distribution with shared (variational) membership probabilities for blocks of data (in an exhaustive partition of the entire data into non-overlapping blocks). Chunky EM starts from a coarse partition of the data and gradually refines the partition until convergence. However, chunky EM does not scale well in the number of components, since all components share the same partition. The individual components are different: in order to obtain membership probabilities of appropriate quality, one component may need fine-grained blocks in one area of the data space, while another component is perfectly fine with coarse blocks in that area. Chunky EM expands the shared partition to match the needed granularity for the most demanding mixture component in any area of the data space, which might unnecessarily increase the computational cost.
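To make the scaling bottleneck concrete, the following minimal sketch (our own illustration, not code from the paper; the use of numpy/scipy and all names are assumptions) computes the standard EM E-step responsibilities for a Gaussian mixture. The pass over all N data points for each of the K components is exactly the O(NK) cost that the rest of the paper works to avoid.

```python
import numpy as np
from scipy.stats import multivariate_normal

def standard_e_step(X, weights, means, covs):
    """Standard EM E-step for a Gaussian mixture.

    Cost is O(N * K): every data point is scored under every component.
    Returns the N x K matrix of membership probabilities p(k | x_n).
    """
    N, K = X.shape[0], len(weights)
    log_resp = np.empty((N, K))
    for k in range(K):  # one dense pass over all N points per component
        log_resp[:, k] = np.log(weights[k]) + \
            multivariate_normal.logpdf(X, means[k], covs[k])
    # normalize in log space for numerical stability
    log_resp -= log_resp.max(axis=1, keepdims=True)
    resp = np.exp(log_resp)
    resp /= resp.sum(axis=1, keepdims=True)
    return resp
```

Every speedup discussed below amounts to replacing the per-point scores in this loop with per-block quantities.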
Chunky EM expands the shared partition to match the needed granularity for the most demanding mixture component in any area of the data space, which might unnecessarily increase the computational *Equal contributors. ?Work done during internship at Microsoft Research. 1 cost. Here, we derive a principled variation, called component-specific EM (CS-EM) that allows component-specific partitions. We demonstrate a significant performance improvement over standard and chunky EM for experiments on synthetic and mentioned customer-business data. 2 Background: Variational and Chunky EM Variational EM. Given a set of i.i.d. data x , {x1 , ? ? ? , xN }, we are interested in estimating the parameters ? = {?1:K , ?1:K } in the K-component mixture model with log-likelihood function P P L(?) = n log k p(xn |?k )?k . (1) For this task, we consider a variational generalization [11] of standard EM [4], which maximizes a lower bound of L(?) through the introduction of a variational distribution Q q. We assume that the variational distribution factorizes in accordance with data points, i.e, q = n qn , where each qn is an arbitrary discrete distribution over mixture components k = 1, . . . , K. We can lower bound L(?) by multiplying each p(xn |?k )?k in (1) with qqnn (k) (k) and apply Jensen?s inequality to get P P L(?) ? n k qn (k)[log p(xn |?k )?k ? log qn (k)] (2) P = L(?) ? n KL (qn ||p(?|xn , ?)) , F(?, q), (3) where p(?|xn , ?) defines the posterior distribution of membership probabilities and KL(q||p) is the Kullback-Leibler (KL) divergence between q and p. The variational EM algorithm alternates the following two steps, i.e. coordinate ascent on F(?, q), until convergence. E-step: q t+1 = arg maxq F(?t , q), M-step: ?t+1 = arg max? F(?, q t+1 ). Q If q is not restricted in any form, the E-step produces q t+1 = n p(?|xn , ?t ), because the KLdivergence is the only term in (3) depending on q. The variational EM is in this case equivalent to the standard EM, and hence produces the maximum likelihood (ML) estimate. In the following, we consider certain ways of restricting q to attain speedup over standard EM, implying that the minimum KL-divergence between qn and p(?|xn , ?) is not necessarily zero. Still the variational EM defines a convergent algorithm, which instead optimizes a lower bound of the log-likelihood. Chunky EM. The chunky EM algorithm [17, 16] falls intoQ the framework of variational EM algorithms. In chunky EM, the variational distribution q = n qn is restricted according to a partition into exhaustive and mutually exclusive blocks of the data. For a given partition, if data points xi and xj are in the same block, then qi = qj . The intuition is that data points in the same block are somewhat similar and can be treated in the same way, which leads to computational savings in the E-step. If M is the number of blocks in a given partition, the E-step for chunky EM has cost O(KM ) whereas in standard EM the cost is O(KN ). The speedup can be tremendous for M  N . The speedup is gained by a trade-off between the tightness of the lower bound for the log-likelihood and the restrictiveness of constraints. Chunky EM starts from a coarse partition and iteratively refines it. This refinement process always produces a tighter bound, since restrictions on the variational distribution are gradually relaxed. The chunky EM algorithm stops when refining any block in a partition will not significantly increase the lower bound. 
3 Component-specific EM

In chunky EM, all mixture components share the same data partition. However, for a particular block of data, the variation in membership probabilities differs across components, resulting in varying differences from the equality-constrained variational probabilities. Roughly, the variation in membership probabilities is greatest for components closer to a block of data, and, in particular, for components far away the membership probabilities are all so small that the variation is insignificant. This intuition suggests that we might gain a computational speedup if we create component-specific data partitions, where a component pays more attention to nearby data (fine-grained blocks) than to data far away (coarser blocks). Let $M_k$ be the number of data blocks in the partition for component k. The complexity of the E-step is then $O(\sum_k M_k)$, compared to $O(KM)$ in chunky EM. Our conjecture is that we can lower bound the log-likelihood equally well with $\sum_k M_k$ significantly smaller than $KM$, resulting in a much faster E-step. Since our model maintains different partitions for different mixture components, we call it the component-specific EM algorithm (CS-EM).

[Figure 1: Trees 1-5 represent 5 mixture components with individual tree-consistent partitions ($\mathcal{B}_1$-$\mathcal{B}_5$) indicated by the black nodes. The bottom-right figure is the corresponding MPT, where $\{\cdot\}$ indicates the component marks and a, b, c, d, e, f, g enumerate all the marked nodes. This MPT encodes all the component-specific information for the 5 mixtures.]

Main Algorithm. Figure 2 (on p. 6) shows the main flow of CS-EM. Starting from a coarse partition for each component (see Section 4.1 for examples), CS-EM runs variational EM to convergence and then selectively refines the component-specific partitions. This process continues until further refinements will not significantly improve the lower bound. Sections 3.1-3.5 provide a detailed description of basic concepts in support of this brief outline of the main structure of the algorithm.

3.1 Marked Partition Trees

It is convenient to organize the data into a pre-computed partition tree, where a node in the tree represents the union of the data represented by its children. Individual data points are not actually stored in each node; rather, the sufficient statistics necessary for our estimation operations are pre-computed and stored here. (We discuss these statistics in Section 3.3.) Any hierarchical decomposition of data that ensures some degree of similarity between data in a block is suitable for constructing a partition tree. We exemplify our work by using KD-trees [9]. Creating a KD-tree and storing the sufficient statistics in its nodes has cost O(N log N), where N is the number of data points. We will in the following consider tree-consistent partitions, where each data block in a partition corresponds to exactly one node for a cut (possibly across different levels) in the tree; see Figure 1.
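A sketch of such a partition tree with cached statistics follows (our own illustration under the assumption of Gaussian components; the median split and the leaf size are arbitrary choices, not the paper's):

```python
import numpy as np

class KDNode:
    """KD-tree node caching the sufficient statistics of the data under it."""
    def __init__(self, X, depth=0, leaf_size=50):
        self.n = len(X)
        self.sum_x = X.sum(axis=0)       # for <x>_B
        self.sum_xxT = X.T @ X           # for <x x^T>_B
        self.left = self.right = None
        if self.n > leaf_size:
            dim = depth % X.shape[1]     # cycle through split dimensions
            order = np.argsort(X[:, dim])
            mid = self.n // 2            # median split
            self.left = KDNode(X[order[:mid]], depth + 1, leaf_size)
            self.right = KDNode(X[order[mid:]], depth + 1, leaf_size)
```

With the sums cached at every node, the block averages needed later fall out as `sum_x / n` and `sum_xxT / n` for any tree-consistent block, without revisiting the data.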
Let us now define a marked partition tree (MPT), a simple encoding of all component-specific partitions, as follows. Let $\mathcal{B}_k$ be the data partition (a set of blocks) in the tree-consistent partition for mixture component k. In Figure 1, for example, $\mathcal{B}_1$ is the partition into data blocks associated with nodes {e, c, d}. In the shared data partition tree used to generate the component-specific partitions, we mark the corresponding nodes for the data blocks in each $\mathcal{B}_k$ by the component identifier k. Each node v in the tree will in this way contain a (possibly empty) set of component marks, denoted by $K_v$. The MPT is now the subtree obtained by pruning all unmarked nodes without marked descendants from the tree. Figure 1 shows an example of an MPT. This example is special in the sense that all nodes in the MPT are marked. In general, an MPT may have unmarked nodes at any location above the leaves. For example, in chunky EM, the component-specific partitions are the same for each mixture component. In this case, only the leaves in the MPT are marked, with each leaf marked by all mixture components. The following important property for an MPT holds since all component-specific partitions are constructed with respect to the same data partition tree.

Property 1. Let T denote an MPT. The marked nodes on a path from leaf to root in T mark exactly one data block from each of the K component-specific data partitions.

In the following, it becomes important to identify the data block in a component-specific partition which embeds the block defined by a leaf. Let L denote the set of leaves in T, and let $\mathcal{B}_L$ denote a partition with data blocks $B_l \in \mathcal{B}_L$ according to these leaves. We let $B_{k(l)}$ denote the specific $B_k \in \mathcal{B}_k$ with the property that $B_l \subseteq B_k$. Property 1 ensures that $B_{k(l)}$ exists for all l, k.

Example: In Figure 1, the path a → e → g in turn marks the components $K_a = \{3, 4\}$, $K_e = \{1, 2\}$, and $K_g = \{5\}$, and we see that each component is marked exactly once on this path, as stated in Property 1. Accordingly, for the leaf a, $(B_{3(a)} = B_{4(a)}) \subset (B_{1(a)} = B_{2(a)}) \subset B_{5(a)}$.

3.2 The Variational Distribution

Our variational distribution q assigns the same variational membership probability to mixture component k for all data points in a component-specific block $B_k \in \mathcal{B}_k$. That is,

$q_n(k) = q_{B_k}$ for all $x_n \in B_k$,   (4)

which we denote as the component-specific block constraint. Unlike chunky EM, we do not assume that the data partition $\mathcal{B}_k$ is the same across different mixture components. The extra flexibility complicates the estimation of q in the E-step. This is the central challenge of our algorithm.

To further drive intuition behind the E-step complication, let us make the sum-to-one constraint for the variational distributions $q_n(\cdot)$ explicit. That is, $\sum_k q_n(k) = 1$ for all data points n, which according to the above block constraint and using Property 1 can be reformulated as the |L| constraints

$\sum_k q_{B_{k(l)}} = 1$ for all $l \in L$.   (5)

Notice that since $q_{B_k}$ can be associated with an internal node in T, it may be the case that $q_{B_{k(l)}}$ represents the same $q_{B_k}$ across different constraints in (5). In fact,

$q_{B_{k(l)}} = q_{B_k}$ for all $l \in \{l \in L \mid B_l \subseteq B_k\}$,   (6)

implying that the constraints in (5) are intertwined according to the nested structure given by T. The closer a data block $B_k$ is to the root of T, the more constraints simultaneously involve the same $q_{B_k}$.

Example: Consider the MPT in Figure 1. Here, $q_{B_{5(a)}} = q_{B_{5(b)}} = q_{B_{5(c)}} = q_{B_{5(d)}}$, and hence the density for component 5 is the same across all four sum-to-one constraints. Similarly, $q_{B_{1(a)}} = q_{B_{1(b)}}$, so the density is the same for component 1 in the two constraints associated with leaves a and b.
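The MPT machinery of Sections 3.1-3.2 is easy to state in code. The sketch below is our own rendering under stated assumptions (nodes carry `marks`, `children`, and a `parent` pointer; none of these names come from the paper): pruning a marked shared tree down to the MPT, and the Property-1 path lookup that underlies the intertwined constraints in (5)-(6).

```python
def prune_to_mpt(node):
    """Prune all unmarked nodes without marked descendants (MPT definition).
    Returns the node if it survives in the MPT, else None."""
    node.children = [c for c in map(prune_to_mpt, node.children) if c]
    return node if node.marks or node.children else None

def block_for_component(leaf, k):
    """Find B_{k(l)}: by Property 1, exactly one node on the leaf-to-root
    path of the MPT is marked with component k, and that node is the block
    of component k's partition embedding the leaf block B_l."""
    node = leaf
    while node is not None:
        if k in node.marks:
            return node
        node = node.parent
    raise ValueError(f"no mark for component {k} on the path (Property 1)")
```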
3.3 Efficient Variational E-step

Accounting for the component-specific block constraint in (4), the lower bound $\mathcal{F}(\theta, q)$ in Eq. (2) can be expressed as a sum of local parts, $\mathcal{F}(\theta, q_{B_k})$, as follows:

$\mathcal{F}(\theta, q) = \sum_k \sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k}\,(g_{B_k} + \log \pi_k - \log q_{B_k}) = \sum_k \sum_{B_k \in \mathcal{B}_k} \mathcal{F}(\theta, q_{B_k})$,   (7)

where we have defined the block-specific geometric mean

$g_{B_k} = \langle \log p(x|\theta_k)\rangle_{B_k} = \sum_{x \in B_k} \log p(x|\theta_k)\,/\,|B_k|$.   (8)

We integrate the sum-to-one constraints in (5) into the lower bound in (7) by using the standard principle of Lagrange duality (see, e.g., [1]). Accordingly, we construct the Lagrangian

$\mathcal{F}(\theta, q, \lambda) = \sum_k \sum_{B_k} \mathcal{F}(\theta, q_{B_k}) + \sum_l \lambda_l \big(\sum_k q_{B_{k(l)}} - 1\big)$,

where $\lambda \triangleq \{\lambda_1, \dots, \lambda_L\}$ are the Lagrange multipliers for the constraints in Eq. (5). Recall the relationship between $q_{B_k}$ and $q_{B_{k(l)}}$ in (6). By setting $\partial \mathcal{F}(\theta, q, \lambda)/\partial q_{B_k} = 0$, we obtain

$q_{B_k}(\lambda) = \exp\!\big((1/|B_k|) \sum_{l: B_l \subseteq B_k} \lambda_l - 1\big)\, \pi_k \exp(g_{B_k})$.   (9)

Solving the dual optimization problem $\lambda^* = \arg\min_\lambda \mathcal{F}(\theta, q(\lambda), \lambda)$ now leads to the primal solution given by $q^*_{B_k} = q_{B_k}(\lambda^*)$.

[Footnote 1: Notice that Eq. (9) implies that the positivity constraints $q_n(k) \ge 0$ are automatically satisfied during estimation.]

For chunky EM, the E-step is straightforward, because $B_{k(l)} = B_l$ and therefore $\sum_{l: B_l \subseteq B_{k(l)}} \lambda_l = \lambda_l$ for all $k = 1, \dots, K$. Substituting (9) into the sum-to-one constraints in (5) reveals that each $\lambda_l$ can be solved independently, leading to the following closed-form solution for $q_{B_{k(l)}}$:

$\lambda^*_l = |B_l|\big(1 + \log \sum_k \pi_k \exp(g_{B_{k(l)}})\big), \qquad q^*_{B_{k(l)}} = \pi_k \exp(g_{B_{k(l)}})/Z$,   (10)

where $Z = \sum_k \pi_k \exp(g_{B_{k(l)}})$ is a normalizing constant. CS-EM does not enjoy a similarly simple optimization, because of the intertwined constraints, as described in Section 3.2. Fortunately, we can still obtain a closed-form solution. Essentially, we use the nesting structure of the constraints to reduce the Lagrange multipliers from the solution one at a time until only one is left, in which case the optimization is easily solved. We describe the basic approach here and defer the technical details (and pseudo-code) to the supplement.

Consider a leaf node $l \in L$ and recall that $K_l$ denotes the components with $B_{k(l)} = B_l$ in their partitions. The sum-to-one constraint in (5) that is associated with leaf $l$ can therefore be written as

$\sum_{k \in K_l} q_{B_{k(l)}} + \sum_{k \notin K_l} q_{B_{k(l)}} = 1$.

Furthermore, for all $k \in K_l$, the $q_{B_{k(l)}}$, as defined in (9), is a function of the same $\lambda_l$. Accordingly,

$q_l \triangleq \sum_{k \in K_l} q_{B_{k(l)}} = \exp(\lambda_l/|B_l| - 1) \sum_{k \in K_l} \pi_k \exp(g_{B_{k(l)}})$.   (11)

Now, consider $l$'s leaf-node sibling, $l'$. For example, in Figure 1, node $l = a$ and $l' = b$. The two leaves share the same path from their parent to the root in T. Hence, using Property 1, it must be the case that $B_{k(l)} = B_{k(l')}$ for $k \notin K_l$. The two sum-to-one constraints (one for each leaf) therefore imply that $q_l = q_{l'}$. Using (11), it now follows that

$\lambda_{l'} = |B_{l'}|\big(\lambda_l/|B_l| + \log \sum_{k \in K_l} \pi_k \exp(g_{B_{k(l)}}) - \log \sum_{k' \in K_{l'}} \pi_{k'} \exp(g_{B_{k'(l')}})\big) \triangleq f(\lambda_l)$.

Thus, we can replace $\lambda_{l'}$ with $f(\lambda_l)$ in all $q_{B_k}$ expressions. Further analysis (detailed in the supplement) shows how we more efficiently account for this parameter reduction and continue the process, now considering the parent node a new "leaf" node once all children have been processed. When reaching the root, every $q_{B_k}$ expression on the path from $l$ only involves the single $\lambda_l$, and the optimal $\lambda^*_l$ can therefore be found analytically by solving the corresponding sum-to-one constraint in (5). Following, all optimal $q^*_{B_k}$ are found by inserting $\lambda^*_l$ into the reduced $q_{B_k}$ expressions. Finally, it is important to notice that $g_{B_k}$ is the only data-dependent part in the above E-step solution.
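Jumping ahead to the Gaussian case worked out in the example below, the geometric mean $g_{B_k}$ is computable entirely from cached block statistics. A sketch (our own, with our own helper names; it uses the standard identity for the expected Gaussian log-density under an empirical block distribution):

```python
import numpy as np

def block_stats(X_block):
    """Per-block sufficient statistics for a Gaussian: size, <x>_B, <x x^T>_B.
    Only these are needed to evaluate Eq. (8)."""
    n = len(X_block)
    return n, X_block.mean(axis=0), X_block.T @ X_block / n

def g_block(stats, mu, Sigma):
    """Block geometric mean <log N(x; mu, Sigma)>_B from cached statistics.

    Uses <(x-mu)^T P (x-mu)>_B = tr(P <xx^T>) - 2 mu^T P <x> + mu^T P mu
    with P = Sigma^{-1}, so no pass over the raw data is needed.
    """
    n, mx, mxx = stats
    d = len(mu)
    P = np.linalg.inv(Sigma)
    quad = np.trace(P @ mxx) - 2.0 * mu @ P @ mx + mu @ P @ mu
    return -0.5 * (d * np.log(2 * np.pi)
                   + np.linalg.slogdet(Sigma)[1] + quad)
```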
It is therefore key to the computational efficiency of the CS-EM algorithm that $g_{B_k}$ can be calculated from pre-computed statistics, which is in fact the case for the large class of exponential family distributions. These are the statistics that are stored in the nodes of the MPT.

Example: Let $p(x|\theta_k)$ be an exponential family distribution

$p(x|\theta_k) = h(x) \exp(\eta_k^T T(x) - A(\eta_k))$,   (12)

where $\eta_k$ is the natural parameter, $h(x)$ is the reference function, $T(x)$ is the sufficient statistic, and $A(\eta_k)$ is the normalizing constant. Then $g_{B_k} = \langle \log h(x)\rangle_{B_k} + \eta_k^T \langle T(x)\rangle_{B_k} - A(\eta_k)$, where $\langle \log h(x)\rangle_{B_k}$ and $\langle T(x)\rangle_{B_k}$ are the statistics that we pre-compute for (8). In particular, if $p(x|\theta_k) = \mathcal{N}_d(\mu_k, \Sigma_k)$, a Gaussian distribution, then

$h(x) = 1$, $T(x) = (x, xx^T)$, $\eta_k = (\Sigma_k^{-1}\mu_k, -\Sigma_k^{-1}/2)$, $A(\eta_k) = \tfrac{1}{2}\big(d \log(2\pi) + \log|\Sigma_k| + \mu_k^T \Sigma_k^{-1} \mu_k\big)$,

and the statistics $\langle \log h(x)\rangle_{B_k} = 0$ and $\langle T(x)\rangle_{B_k} = (\langle x\rangle_{B_k}, \langle xx^T\rangle_{B_k})$ can be pre-computed.

3.4 Efficient Variational M-step

In the variational M-step, the model parameters $\theta = \{\theta_{1:K}, \pi_{1:K}\}$ are updated by maximizing Eq. (7) w.r.t. $\theta$ under the constraint $\sum_k \pi_k = 1$. Hereby, the update is

$\pi_k \propto \sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k}, \qquad \theta_k = \arg\max_{\theta_k} \sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k}\, g_{B_k}$.   (13)

Thus, the M-step can be efficiently computed using the pre-computed sufficient statistics as well.

Example: If $p(x|\theta_k)$ has the exponential family form in Eq. (12), $\eta_k$ is obtained by solving

$\eta_k = \arg\max_{\eta_k} \big(\sum_{B_k \in \mathcal{B}_k} q_{B_k} \sum_{x \in B_k} T(x)\big)\eta_k - \big(\sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k}\big) A(\eta_k)$.

In particular, if $p(x|\theta_k) = \mathcal{N}_d(\mu_k, \Sigma_k)$, then

$\mu_k = \big(\sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k} \langle x\rangle_{B_k}\big)/(N\pi_k), \qquad \Sigma_k = \big(\sum_{B_k \in \mathcal{B}_k} |B_k|\, q_{B_k} \langle xx^T\rangle_{B_k}\big)/(N\pi_k) - \mu_k \mu_k^T$.

3.5 Efficient Variational R-step

Given the current component-specific data partitions, as marked in the MPT T, a refining step (R-step) selectively refines these partitions. Any refinement enlarges the family of variational distributions, and therefore always tightens the optimal lower bound for the log-likelihood. We define a refinement unit as the refinement of one data block in the current partition for one component in the model. The efficiency of CS-EM is affected by the number of refinement units performed at each R-step. With too few units we spend too much time on refining, and with too many units some of the refinements may be far from optimal and therefore unnecessarily slow down the algorithm. We have empirically found K refinement units at each R-step to be a good choice. This introduces K new free variational parameters, which is similar to a refinement step in chunky EM. However, chunky EM refines the same data block across all components, which is not the case in CS-EM.

Figure 2: The CS-EM algorithm.
1: Initialization: build KD-tree, set initial MPT, set initial θ, run E-step to set q, set t, s = 0, compute F_t, F_s using (7).
2: repeat
3:   repeat
4:     Run variational E-step and M-step.
5:     Set t ← t + 1 and compute F_t using (7).
6:   until (F_t − F_{t−1})/(F_t − F_0) < 10^{-4}.
7:   Run variational R-step.
8:   Set s ← s + 1 and F_s = F_t.
9: until (F_s − F_{s−1})/(F_s − F_0) < 10^{-4}.

Figure 3: Variational R-step algorithm.
1: Initialize priority queue Q favoring high ΔF_{v,k} values.
2: for each marked node v in T do
3:   Compute q̃ via E-step with constraints as in (14).
4:   for all k ∈ K_v do
5:     Insert candidate (v, k) into Q according to ΔF_{v,k}.
6:   end for
7: end for
8: Select the K top-ranked (v, k) in Q for refinement.
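Figure 3 translates almost directly into code. The sketch below is our own rendering, with `guiding_e_step` and `delta_F` as placeholder hooks for the local E-step of Eq. (14) and the bound improvement defined in the text that follows; neither name comes from the paper.

```python
import heapq
import itertools

def r_step(marked_nodes, K, guiding_e_step, delta_F):
    """Variational R-step, mirroring Figure 3 (a sketch under the assumed
    hooks above).  Returns the K refinement units (v, k) with the largest
    approximate improvement to the lower bound."""
    heap, tie = [], itertools.count()
    for v in marked_nodes:
        q_tilde = guiding_e_step(v)          # local E-step under Eq. (14)
        for k in v.marks:
            # max-heap via negated score; the counter breaks ties safely
            heapq.heappush(heap, (-delta_F(v, k, q_tilde), next(tie), (v, k)))
    return [heapq.heappop(heap)[2] for _ in range(min(K, len(heap)))]
```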
Ideally, an R-step should select the refinement units leading to the optimal improvement for $\mathcal{F}$. Good candidates can be found by performing a single E-step for each candidate and then selecting the units that improve $\mathcal{F}$ the most. This demands the evaluation of an E-step for each of the $\sum_k M_k$ possible refinement units. Exact evaluation for this many full E-steps is prohibitively expensive, and we therefore instead approximate these refinement-guiding E-steps by a local computation scheme based on the intuition that refining a block for a specific component mostly affects components with similar local partition structures. The algorithm is described in Figure 3, with details as follows.

Consider moving all component marks for $v \in T$ to its children $\mathrm{ch}(v)$, where each child $u \in \mathrm{ch}(v)$ receives a copy. Let $\tilde{T}$ denote the altered MPT, and $\tilde{K}_v, \tilde{K}_u$ denote the sets of marks at $v, u \in \tilde{T}$. Hence, $\tilde{K}_v = \emptyset$ and $\tilde{K}_u = K_u \cup K_v$. To approximate the new variational distribution $\tilde{q}$, we fix the value for each $\tilde{q}_{B_{k(l)}}$, with $k \notin \tilde{K}_u$ and $l \in L$, to the value obtained for the distribution $q$ before the refinement. In this case, the sum-to-one constraints for $\tilde{q}$ simplify as

$\sum_{k \in \tilde{K}_u} \tilde{q}_{B_{k(l)}} + R_l = 1$ for all $l \in L$,   (14)

with $R_l = 1 - \sum_{k \in \tilde{K}_u} q_{B_{k(l)}}$ being the fixed values. Notice that $\sum_{k \in \tilde{K}_u} q_{B_{k(l)}} = 0$ for any leaf $l$ not under $u$, and that $q_{B_{k(l)}} = q_{B_{k(u)}}$ and $\tilde{q}_{B_{k(l)}} = \tilde{q}_{B_{k(u)}}$ for $k \in \tilde{K}_u$ and any leaf $l$ under $u$. The constraints in (14) therefore reduce to the following $|\mathrm{ch}(v)|$ independent constraints:

$\sum_{k \in \tilde{K}_u} \tilde{q}_{B_{k(u)}} + R_u = 1$ for all $u \in \mathrm{ch}(v)$.

Each $\tilde{q}_{B_{k(u)}}$, $k \in \tilde{K}_u$, now has a local closed-form solution similar to (10), with $Z = \sum_{k \in \tilde{K}_u} \pi_k \exp(g_{B_{k(u)}})/(1 - R_u)$. The improvement to $\mathcal{F}$ that is achieved by the refinement-guiding E-step for the refinement unit refining data block v for component k is denoted $\Delta\mathcal{F}_{v,k}$, and can be computed as

$\Delta\mathcal{F}_{v,k} = \sum_{u \in \mathrm{ch}(v)} \mathcal{F}(\theta, \tilde{q}_{B_{k(u)}}) - \mathcal{F}(\theta, q_{B_{k(v)}})$.

This improvement is computed for all possible refinement units, and the K highest-scoring units are then selected in the R-step. Notice that this selective refinement step will most likely not refine the same data block for all components and therefore creates component-specific partitions.

Example: In Figure 1, node e and its children {a, b} are marked $K_e = \{1, 2\}$ and $K_a = K_b = \{3, 4\}$. For the two candidate refinement units associated with e, we have $\tilde{K}_e = \emptyset$ and $\tilde{K}_a = \tilde{K}_b = \{1, 2, 3, 4\}$. With $q_{B_{5(u)}}$ held fixed, we will for each child $u \in \{a, b\}$ optimize $\tilde{q}_{B_{k(u)}}$, $k = 1, 2, 3, 4$, and following, (e, 1) and (e, 2) are inserted into the priority queue of candidates according to their $\Delta\mathcal{F}_{v,k}$ values.

4 Experiments

In this section we provide a systematic evaluation of CS-EM, chunky EM, and standard EM on synthetic data, as well as a comparison between CS-EM and chunky EM on the business-customer data mentioned in Section 1. (Standard EM is too slow to be included in the latter experiment.)

4.1 Experimental setup

For the synthetic experiments, we generated random training and test data sets from Gaussian mixture models (GMMs) by varying one (in a single case two) of the following default settings: #data points N = 100,000, #mixture components K = 40, #dimensions d = 2, and c-separation c = 2 (see footnote 2).

[Footnote 2: A GMM is c-separated [3] if, for any $i \ne j$, $f(i,j) \triangleq \|\mu_i - \mu_j\|^2 / \max(\lambda_{\max}(\Sigma_i), \lambda_{\max}(\Sigma_j)) \ge dc^2$, where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue. We only require that $\mathrm{Median}[f(i,j)] \ge dc^2$.]
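For reference, the c-separation of footnote 2 can be measured with a few lines (our own sketch; function and variable names are assumptions):

```python
import numpy as np
from itertools import combinations

def median_c_separation(means, covs):
    """Median c-separation of a GMM (footnote 2): solves
    median_{i<j} f(i,j) = d * c^2 for c, with
    f(i,j) = ||mu_i - mu_j||^2 / max(lmax(Sig_i), lmax(Sig_j))."""
    d = means.shape[1]
    lmax = [np.linalg.eigvalsh(S)[-1] for S in covs]  # largest eigenvalue
    f = [np.sum((means[i] - means[j]) ** 2) / max(lmax[i], lmax[j])
         for i, j in combinations(range(len(means)), 2)]
    return np.sqrt(np.median(f) / d)
```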
The (proprietary) business-customer data was obtained through collaboration with PitneyBowes Inc. and Yellowpages.com LLC. For the experiments on this data, N = 6.5 million and d = 2, corresponding to the latitude and longitude of potential customers in Washington state. The basic assumption is that potential customers act as rational consumers and frequent the somewhat closest business locations to purchase a good or service. The locations of competing stores of a particular type, in this way, correspond to fixed centers for components in a mixture model. (A less naive model with the penetration level for a good or service and the relative attractiveness of stores is the object of related research, but is not important for the computational feasibility studied here.)

The synthetic experiments are initialized as follows. After constructing the KD-tree, the first tree level containing at least K nodes (level $\lceil \log_2 K \rceil$) is used as the initial data partition for both chunky EM and all components in CS-EM. For all algorithms (including standard EM), we randomly chose K data blocks from the initial partition and initialized parameters for the individual mixture components accordingly. Mixture weights are initialized with a uniform distribution. The experiments on the business-customer data are initialized in the same way, except that the component centers are fixed and the initial data blocks that cover these centers are used for initializing the remaining parameters. For CS-EM we also considered an alternative initialization of data partitions, which better matches the rationale behind component-specific partitions. It starts from the CS-EM initialization and recursively, according to the KD-tree structure, merges two data blocks in a component-specific partition if the merge has little effect on that component (see footnote 3). We name this variant CS-EM*.

4.2 Results

For the synthetic experiments, we compared the run-times for the competing algorithms to reach a parameter estimate of the same quality (and therefore similar clustering performance, not counting different local maxima), defined as follows. We recorded the log-likelihood for the test data at each iteration of the EM algorithm, and before each R-step in chunky EM and CS-EM. We ran all algorithms to convergence at level $10^{-4}$, and the test log-likelihood for the algorithm with the lowest value was chosen as the baseline (see footnote 4). We then recorded the run-time for each algorithm to reach this baseline, and computed the EM-speedup factors for chunky EM, CS-EM, and CS-EM*, each defined as the standard EM run-time divided by the run-time for the alternative algorithm. We repeated all experiments with five different parameter initializations and report the averaged results.

Figure 4 shows the EM-speedups for the synthetic data. First of all, we see that both CS-EM and CS-EM* are significantly faster than chunky EM in all experiments. In general, the $\sum_k M_k$ variational parameters needed for the CS-EM algorithms are far fewer than the KM parameters needed for chunky EM in order to reach an estimate of the same quality. For example, for the default experimental setting, the ratio $KM/\sum_k M_k$ is 2.0 and 2.1 for, respectively, CS-EM and CS-EM*. We also see that there is no significant difference in speedup between CS-EM and CS-EM*. This observation can be explained by the fact that the resulting component-specific data partitions greatly refine the initial partitions, and any computational speedup due to the smarter initial partition in CS-EM* is therefore overwhelmed. Hence, a simple initial partition, as in CS-EM, is sufficient.
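Returning to the CS-EM* initialization described above, footnote 3 (reproduced with the other footnotes below) spells out the merge test. The sketch below is our reading of that garbled footnote; in particular, the convention MD(a, b | Sigma) for the Mahalanobis distance is our assumption:

```python
import numpy as np

def mahalanobis(a, b, Sigma):
    d = a - b
    return float(np.sqrt(d @ np.linalg.solve(Sigma, d)))

def should_merge(mu, Sigma, mu_parent, mu_left, mu_right, tol=0.05):
    """Merge test for the CS-EM* initialization (footnote 3): merge the two
    child blocks if, relative to the component (mu, Sigma), both children
    sit at nearly the same Mahalanobis distance as their parent block."""
    dp = mahalanobis(mu_parent, mu, Sigma)
    dl = mahalanobis(mu_left, mu, Sigma)
    dr = mahalanobis(mu_right, mu, Sigma)
    return abs(dl / dp - 1) < tol and abs(dr / dp - 1) < tol
```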
Finally, similar to results already reported for chunky EM in [17, 16], we see for all of chunky EM, CS-EM, and CS-EM* that the number of data points and the amount of c-separation have a positive effect on EM-speedup, while the number of dimensions and the number of components have a negative effect. However, the last plot in Figure 4 reveals an important difference between chunky EM and CS-EM: with a fixed ratio between the number of data points and the number of clusters, the EM-speedup declines a lot for chunky EM as the number of clusters and data points increases. This observation is important for the business-customer data, where increasing the area of investigation (from city to county to state to country) has this characteristic for the data.

In the second experiment on the business-customer data, standard EM is computationally too demanding. For example, for the "Nail salon" example in Figure 5, a single EM iteration takes about 5 hours. In contrast, CS-EM runs to convergence in 20 minutes. To compare run-times for chunky EM and CS-EM, we therefore slightly modified the way we ensure that the two algorithms reach a parameter estimate of the same quality. We use the lowest of the $\mathcal{F}$ values (on training data) obtained for the two algorithms at convergence as the baseline, and record the time for each algorithm to reach this baseline. Figure 5 shows the speedup (time ratio) and the reduction in the number of variational parameters (parameter ratio) for CS-EM compared to chunky EM, as evaluated on exemplary types of businesses. Again, CS-EM is significantly faster than chunky EM, and the speedup is achieved by a better targeting of the variational distribution through the component-specific partitions.

[Footnote 3: Let μ and Σ be the mean and variance parameters for an initial component, and let μ_p, μ_l, and μ_r denote the sample means for the data in the considered parent, left child, and right child. We merge if |MD(μ_l, μ|Σ)/MD(μ_p, μ|Σ) − 1| < 0.05 and |MD(μ_r, μ|Σ)/MD(μ_p, μ|Σ) − 1| < 0.05, where MD(·, ·|Σ) is the Mahalanobis distance.]
[Footnote 4: For the default experimental setting, for example, the baseline is reached at 96% of the log-likelihood improvement from initialization to standard EM convergence.]

Figure 4: EM-speedup factors on synthetic data.

Figure 5: A comparison of run-time and final number of variational parameters for chunky EM vs. CS-EM for exemplary business types with different numbers of stores (mixture components).

  Business type   #stores  time ratio  parameter ratio
  Bowling         129      5.0         2.41
  Dry cleaning    815      21.2        2.81
  Nail salon      1290     35.8        3.51
  Pizza           1327     33.0        3.18
  Tax filing      1459     34.8        3.41
  Conv. store     1739     29.4        3.42

5 Related and Future Work

Related work. CS-EM combines the best from two major directions in the literature regarding speedup of EM for mixture modeling. The first direction is based on powerful heuristic ideas, but without provable convergence due to the lack of a well-defined objective function. The work in [10] is a prominent example, where KD-tree partitions were first used for speeding up EM. As also pointed out in [17, 16], the method will likely, but not provably, converge for fine-grained partitions. In contrast, CS-EM is provably convergent, even for arbitrarily rough partitions, if extreme speedup is needed. The granularity of partitions in [10] is controlled by a user-specified threshold on the minimum and maximum membership probabilities that are reachable within the boundaries of a node in the KD-tree. In contrast, we have almost no tuning parameters.
We instead let the data speak for itself by having the final convergence determine the granularity of the partitions. Finally, [10] "prunes" a component (sets the membership probability to zero) for data far away from the component. This relates to our component-specific partitions, but ours is more principled, with convergence guarantees.

The second direction of speedup approaches is based on the variational EM framework [11]. In [11], a "sparse" EM was presented, which at some iterations only updates part of the parameters and hence relates to the pruning idea in [10]. [14] presents an "incremental" and a "lazy" EM, which gain speedup by performing E-steps on varying subsets of the data rather than the entire data. All three methods guarantee convergence. However, they need to periodically perform an E-step over the entire data, and, in contrast to CS-EM, their E-step is therefore not truly sub-linear in sample size, making them potentially unsuitable for large-scale applications. The chunky EM in [17, 16] is the approach most similar to our CS-EM. Both are based on the variational EM framework and therefore guarantee convergence, but CS-EM is faster and scales better in the number of clusters. In addition, heuristic sub-sampling is common practice when faced with a large amount of data. One could argue that chunky EM is an intelligent sub-sampling method, where 1) instead of sampled data points it uses geometric averages for blocks of data in a given data partition, and 2) it automatically chooses the "sampling size" by a learning-curve method, where $\mathcal{F}$ is used to measure the utility of increasing the granularity of the partition. Sub-sampling therefore has the same computational complexity as chunky EM, and our results therefore suggest that we should expect CS-EM to be much faster than sub-sampling and to scale better in the number of mixture components. Finally, we exemplified our work by using KD-trees as the tree-consistent partition structure for generating the component-specific partitions in CS-EM, which limits its effectiveness in high dimensions. However, any hierarchical partition structure can be used, and the work in [8] therefore suggests that changing to an anchor tree (a special kind of metric tree [15]) will also render CS-EM effective in high dimensions, under the assumption of lower intrinsic dimensionality for the data.

Future Work. Future work will include parallelization of the algorithm and extensions to 1) non-probabilistic clustering methods, e.g., k-means clustering [6, 13, 5], and 2) general EM applications beyond mixture modeling.

References
[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] P. S. Bradley, U. M. Fayyad, and C. A. Reina. Scaling EM (expectation maximization) clustering to large databases. Technical Report MSR-TR-98-3, Microsoft Research, 1998.
[3] S. Dasgupta. Learning mixtures of Gaussians. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, pages 634–644, 1999.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[5] G. Hamerly. Making k-means even faster. In SIAM International Conference on Data Mining (SDM), 2010.
[6] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):881–892, 2002.
[7] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley Interscience, New York, USA, 2000.
[8] A. Moore. The anchors hierarchy: Using the triangle inequality to survive high-dimensional data. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pages 397–405. AAAI Press, 2000.
[9] A. W. Moore. A tutorial on kd-trees. Technical Report 209, University of Cambridge, 1991.
[10] A. W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In Advances in Neural Information Processing Systems, pages 543–549. Morgan Kaufman, 1999.
[11] R. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368, 1998.
[12] L. E. Ortiz and L. P. Kaelbling. Accelerating EM: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 512–521, 1999.
[13] D. Pelleg and A. Moore. Accelerating exact k-means algorithms with geometric reasoning. In S. Chaudhuri and D. Madigan, editors, Proceedings of the Fifth International Conference on Knowledge Discovery in Databases, pages 277–281. AAAI Press, 1999.
[14] B. Thiesson, C. Meek, and D. Heckerman. Accelerating EM for large databases. Machine Learning, 45(3):279–299, 2001.
[15] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40(4):175–179, 1991.
[16] J. J. Verbeek, J. R. Nunnink, and N. Vlassis. Accelerated EM-based clustering of large data sets. Data Mining and Knowledge Discovery, 13(3):291–307, 2006.
[17] J. J. Verbeek, N. Vlassis, and J. R. J. Nunnink. A variational EM algorithm for large-scale mixture modeling. In Proceedings of the 8th Annual Conference of the Advanced School for Computing and Imaging (ASCI), 2003.
Static Analysis of Binary Executables Using Structural SVMs

Nikos Karampatziakis*
Department of Computer Science, Cornell University, Ithaca, NY 14853
[email protected]

Abstract

We cast the problem of identifying basic blocks of code in a binary executable as learning a mapping from a byte sequence to a segmentation of the sequence. In general, inference in segmentation models, such as semi-CRFs, can be cubic in the length of the sequence. By taking advantage of the structure of our problem, we derive a linear-time inference algorithm which makes our approach practical, given that even small programs are tens or hundreds of thousands of bytes long. Furthermore, we introduce two loss functions which are appropriate for our problem and show how to use structural SVMs to optimize the learned mapping for these losses. Finally, we present experimental results that demonstrate the advantages of our method against a strong baseline.

1 Introduction

In this work, we are interested in the problem of extracting the CPU instructions that comprise a binary executable file. Solving this problem is an important step towards verifying many simple properties of a given program. In particular, we are motivated by a computer security application, in which we want to detect whether a previously unseen executable contains malicious code. This is a task that computer security experts have to solve many times every day, because in the last few years the volume of malicious software has witnessed an exponential increase (estimated at 50000 new malicious code samples every day). However, the tools that analyze binary executables require a lot of manual effort in order to produce a correct analysis. This happens because the tools themselves are based on heuristics and make many assumptions about the way a binary executable is structured.

But why is it hard to find the instructions inside a binary executable? After all, when running a program the CPU always knows which instructions it is executing. The caveat here is that we want to extract the instructions from the executable without running it. On one hand, running the executable will in general reveal little information about all possible instructions in the program, and on the other hand it may be dangerous or even misleading (see footnote 1). Another issue that makes this task challenging is that binary executables contain many other things besides the instructions they will execute (see footnote 2). Furthermore, the executable does not contain any demarcations of the locations of instructions in the file (see footnote 3). Nevertheless, an executable file is organized into sections such as a code section, a section with constants, a section containing global variables, etc. But even inside the code section, there is a lot more than just a stream of instructions. We will refer to all instructions as code and to everything else as data.

[* http://www.cs.cornell.edu/~nk]
[Footnote 1: Many malicious programs try to detect whether they are running under a controlled environment.]
[Footnote 2: Here, we are focusing on Windows executables for the Intel x86 architecture, though everything carries over to any other modern operating system and any other architecture with a complex instruction set.]
[Footnote 3: Executables that contain debugging information are an exception, but most software is released without it.]

For example, the compiler may, for performance reasons, prefer to pad a function with up to 3 data bytes so that the next function starts at an address that is a multiple of 4. Moreover, data can appear inside functions too.
For example, a "switch" statement in C is usually implemented in assembly using a table of addresses, one for each "case" statement. This table does not contain any instructions, yet it can be stored together with the instructions that make up the function in which the "switch" statement appears. Apart from the compiler, the author of a malicious program can also insert data bytes in the code section of her program. The ultimate goal of this act is to confuse the heuristic tools via creative uses of data bytes.

1.1 A text analogy

To convey more intuition about the difficulties in our task we will use a text analogy. The following is an excerpt from a message sent to General Burgoyne during the American revolutionary war [1]:

You will have heard, Dr Sir I doubt not long before this can have reached you that Sir W. Howe is gone from hence. The Rebels imagine that he is gone to the Eastward. By this time however he has filled Chesapeak bay with surprize and terror. Washington marched the greater part of the Rebels to Philadelphia in order to oppose Sir Wm's. army.

The sender also sent a mask via a different route that, when placed on top of the message, revealed only the words that are shown here in bold. Our task can be thought of as learning what needs to be masked so that the hidden message is revealed. In this sense, words play the role of instructions and letters play the role of bytes. For complex instruction sets like the Intel x86, instructions are composed of a variable number of bytes, as words are composed of a variable number of letters. There are also some minor differences. For example, programs have control logic (i.e. execution can jump from one point to another), while text is read sequentially. Moreover, programs do not have spaces while most written languages do (exceptions are Chinese, Japanese, and Thai).

This analogy motivates tackling our problem as predicting a segmentation of the input sequence into blocks of code and blocks of data. An obvious first approach for this task would be to treat it as a sequence labeling problem and train, for example, a linear chain conditional random field (CRF) [2] to tag each byte in the sequence as being the beginning, inside, or outside of a data block. However, this approach ignores much of the problem's structure, most importantly that transitions from code to data can only occur at specific points. Instead, we will use a more flexible model which, in addition to sequence labeling features, can express features of whole code blocks. Inference in our model is as fast as for sequence labeling, and we show a connection to weighted interval scheduling. This strikes a balance between efficient but simple sequence labeling models such as linear chain CRFs, and expressive but slow (see footnote 4) segmentation models such as semi-CRFs [3] and semi-Markov SVMs [4]. To learn the parameters of the model, we will use structural SVMs to optimize performance according to loss functions that are appropriate for our task, such as the sum of incorrect plus missed CPU instructions induced by the segmentation. Before explaining our model in detail, we present some background on the workings of widely used tools for binary code analysis in section 2, which allows us to easily explain our approach in section 3. We empirically demonstrate the effectiveness of our model in section 4 and discuss related work and other applications in section 5. Finally, section 6 discusses future work and states our conclusions.
2 Heuristic tools for analyzing binary executables

Tools for statically analyzing binary executables differ in the details of their workings, but they all share the same high level logic, which is called recursive disassembly (two example tools are IdaPro, http://www.hex-rays.com/idapro, and OllyDbg, http://www.ollydbg.de). The tool starts by obtaining the address of the first instruction from a specific location inside the executable. It then places this address on a stack and executes the following steps while the stack is non-empty. It takes the next address from the stack and disassembles (i.e. decompiles to assembly) the sequence starting from that address. All the disassembled instructions would execute one after the other until we reach an instruction that changes the flow of execution. These control flow instructions are conditional and unconditional jumps, calls, and returns. After the execution of an unconditional jump, the next instruction to be executed is at the address specified by the jump's argument. Other control flow instructions are similar to the unconditional jump. A conditional jump also specifies a condition and does nothing if the condition is false. A call saves the address of the next instruction and then jumps. A return jumps to the address saved by a call (and does not need an address as an argument). The tool places the arguments of control flow instructions it encounters on the stack. If the control flow instruction is a conditional jump or a call, it continues disassembling; otherwise it takes the next address that has not yet been disassembled from the stack and repeats.

Even though recursive disassembly seems like a robust way of extracting the instructions from a program, there are many reasons that can make it fail [6]. Most importantly, the arguments of the control flow instructions do not have to be constants; they can be registers whose values are generally not available during static analysis. Hence, recursive disassembly can run out of addresses much before all the instructions have been extracted. After this point, the tool has to resort to heuristics to populate its stack. For example, a heuristic might check for positions in the sequence that match a hand-crafted regular expression. Furthermore, some heuristics have to be applied in multiple passes over the sequence. According to its documentation, OllyDbg does 12 passes over the sequence.

Recursive disassembly can also fail because of its assumptions. Recall that after encountering a call instruction, it continues disassembling the next instruction, assuming that the call will eventually return to execute it. Similarly, for a conditional jump it assumes that both branches can potentially execute. Though these assumptions are reasonable for most programs, malicious programs can exploit them to confuse the static analysis tools. For example, the author of a malicious program can write a function that, say, adds 3 to the return address that was saved by the call instruction. This means that if the call instruction was spanning positions a, ..., a + l - 1 of the sequence, upon the function's return the next instruction will be at position a + l + 3, not at a + l.
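As a concrete reference, the loop just described can be sketched in a few lines of Python. This is our own minimal rendering, not the paper's implementation: `decode_at` is a hypothetical decoder (e.g., a thin wrapper around a disassembler library) that returns the instruction length, its control-flow kind, and its jump/call target; none of these names appear in the paper.

```python
# A sketch of recursive disassembly. `decode_at(data, addr)` is a
# hypothetical helper returning (length, kind, target), where kind is
# "plain", "jmp", "jcc", "call", or "ret".
def recursive_disassembly(data, entry_point, decode_at):
    visited = set()          # offsets already disassembled
    stack = [entry_point]    # worklist of addresses to explore
    while stack:
        addr = stack.pop()
        # follow the straight-line run of instructions starting at addr
        while 0 <= addr < len(data) and addr not in visited:
            length, kind, target = decode_at(data, addr)
            visited.add(addr)
            if kind == "plain":            # not a control flow instruction
                addr += length
            elif kind in ("call", "jcc"):  # assume the call returns and the
                stack.append(target)       # conditional branch can fall through
                addr += length
            else:                          # "jmp" or "ret": this run ends here
                if kind == "jmp":
                    stack.append(target)
                break
    return visited
```

The two `elif` assumptions (a call returns, a conditional jump may fall through) are exactly the ones the paper notes a malicious program can exploit.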
This will give a completely different decoding of the sequence and is called disassembly desynchronization. To return to a text analogy, recursive disassembly parses the sequence "driverballetterrace" as [driver, ballet, terrace], while the actual parsing, obtained by starting three positions down, is [xxx, verbal, letter, race], where x denotes junk data.

3 A structured prediction model

In this section we will combine ideas from recursive disassembly and structured prediction to derive an expressive and efficient model for predicting the instructions inside a binary executable. As in recursive disassembly, if we are certain that code begins at position i we can unambiguously disassemble the byte sequence starting from position i until we reach a control flow instruction. But unlike recursive disassembly, we maintain a trellis graph, a directed graph that succinctly represents all possibilities. The trellis graph has vertices b_i that denote the possibility that a code block starts at position i. It also has vertices e_j and edges (b_i, e_j) which denote that disassembling from position i yields a possible code block that spans positions i, ..., j. Furthermore, vertices d_i denote the possibility that the i-th position is part of a data block. Edges (e_j, b_{j+1}) and (e_j, d_{j+1}) encode that the next byte after a code block can either be the beginning of another code block, or a data byte, respectively. For data blocks no particular structure is assumed and we just use edges (d_j, d_{j+1}) and (d_j, b_{j+1}) to denote that a data byte can be followed either by another data byte or by the beginning of a code block, respectively. Finally, we include vertices s and t and edges (s, b_1), (s, d_1), (d_n, t) and (e_n, t) to encode that sequences can start and end either with code or data. An example is shown in Figure 1.

The graph encodes all possible valid segmentations of the sequence. (Some subsequences will produce errors while decoding to assembly because some bytes may not correspond to any instructions. These could never be valid code blocks because they would crash the program. Also, the program cannot do something interesting and crash in the same code block; interesting things can only happen with system calls which, being call instructions, have to be at the end of their code block.) In fact, there is a simple bijection P from any valid segmentation y to an s-t path P(y) in this graph. For example, the sequence in Figure 1 contains three code blocks that span positions 1-7, 8-8, and 10-12. This segmentation can be encoded by the path s, b_1, e_7, b_8, e_8, d_9, b_10, e_12, t.

[Figure 1: The top line shows an example byte sequence in hexadecimal. Below this, we show the actual x86 instructions, with position 9 being a data byte. We show both the mnemonic instructions as well as the bytes they are composed of. Some alternative decodings of the sequence are shown on the bottom; the decoding that starts from the second position is able to skip over two control flow instructions. In the middle we show the graph that captures all possible decodings of the sequence, with vertices d_1, ..., d_12, b_1, ..., b_12, e_7, e_8, e_12, s and t. Disassembling from positions 3, 5, and 12 leads to decoding errors.]
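The trellis can be built directly from a disassembler, since each position contributes at most one (b_i, e_j) edge. A minimal sketch under our own naming, where `disassemble_block` is a hypothetical helper that decodes from position i until the first control flow instruction and returns its last byte position, or None on a decoding error (positions 3, 5, and 12 in Figure 1):

```python
# Sketch of the trellis construction of Section 3. Each entry i -> j stands
# for the edge (b_i, e_j): a candidate code block spanning bytes i..j.
def build_intervals(data, disassemble_block):
    intervals = {}
    for i in range(len(data)):
        j = disassemble_block(data, i)
        if j is not None:
            intervals[i] = j
    return intervals
```

The remaining edges of the graph (data-to-data, data-to-code, code-to-data, and the s/t edges) are implicit in the inference routine and need not be materialized.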
As usual for predicting structured outputs [2, 7], we define the score of a segmentation y for a sequence x to be the inner product w^T Phi(x, y), where w in R^d are the parameters of our model and Phi(x, y) in R^d is a vector of features that captures the compatibility of the segmentation y and the sequence x. Given a sequence x and a vector of parameters w, the inference task is to find the highest scoring segmentation

  y_hat = argmax_{y in Y} w^T Phi(x, y)    (1)

where Y is the space of all valid segmentations of x. We will assume that Phi(x, y) decomposes as

  Phi(x, y) := sum_{(u,v) in P(y)} phi(u, v, x)

where phi(u, v, x) is a vector of features that can be computed using only the endpoints of edge (u, v) and the byte sequence. This assumption allows efficient inference because (1) can be rewritten as

  y_hat = argmax_{y in Y} sum_{(u,v) in P(y)} w^T phi(u, v, x)

which we recognize as computing the heaviest path in the trellis graph with edge weights given by w^T phi(u, v, x). This problem can be solved efficiently with dynamic programming by visiting each vertex in topological order and updating the longest path to each of its neighbors.

The inference task can be viewed as a version of the weighted interval scheduling problem. Disassembling from position i in the sequence yields an interval [i, j], where j is the position where the first encountered control flow instruction ends. In weighted interval scheduling we want to select a subset of non-overlapping intervals with maximum total weight. Our inference problem is the same, except we also have a cost for switching to the next interval, say the one that starts at position j + 2, which is captured by the cost of the path e_j, d_{j+1}, b_{j+2}. Finally, the dynamic programming algorithm for solving this version is a simple modification of the classic weighted interval scheduling algorithm. Section 5 discusses other setups where this inference problem arises.

3.1 Loss functions

Now we introduce loss functions that measure how close an inferred segmentation y_hat is to the real one y. First, we argue that Hamming loss, i.e. how well the bytes of the blocks in y_hat overlap with the bytes of the blocks in y, is not appropriate for this task because, as we recall from the text analogy at the end of section 2, two blocks may be overlapping very well but they may lead to completely different decodings of the sequence. Hence, we introduce two loss functions which are more appropriate for our task. The first loss function, which we call block loss, comes from the observation that the beginnings of the code blocks are necessary and sufficient to describe the segmentation. Therefore, we let y and y_hat be the sets of positions where the code blocks start in the two segmentations, and the block loss counts how well these two sets overlap using the cardinality of their symmetric difference

  Delta_B(y, y_hat) = |y| + |y_hat| - 2|y intersect y_hat|

The second loss function, which we call instruction loss, is a little less stringent. In the case where the inferred y_hat identifies, say, the second instruction in a block as its start, we would like to penalize this less, since the disassembly is still synchronized and only missed one instruction. Formally, we let y and y_hat be the sets of positions where the instructions start in the two segmentations and we define the instruction loss Delta_I(y, y_hat) to be the cardinality of their symmetric difference. As an example, consider the segmentation which corresponds to the path s, d_1, b_2, e_12, t in Figure 1.
Then y_hat = {2}, and from the figure we see that the segmentation in the top line has to pass through b_1, b_8, b_10, i.e. y = {1, 8, 10}. Hence its block loss is 4, because it misses b_1, b_8, b_10 and it introduces b_2. For the instruction loss, the positions of the real instructions are y = {1, 4, 6, 8, 10, 11} while the proposed segmentation has y_hat = {2, 9, 10, 11}. Taking the symmetric difference of these sets, we see that the instruction loss has value 6.

Finally, a variation of these loss functions occurs when we aggregate the losses over a set of sequences. If we simply sum the losses for each sequence, then the losses in longer executables may overshadow the losses on shorter ones. To represent each executable equally in the final measure we can normalize our loss functions; for example, we can define the normalized instruction loss to be

  Delta_NI(y, y_hat) = (|y| + |y_hat| - 2|y intersect y_hat|) / |y|

and we similarly define a normalized block loss Delta_NB. If |y_hat| = |y|, Delta_NI and Delta_NB are scaled versions of a popular loss function 1 - F1, where F1 is the harmonic mean of precision and recall.

3.2 Training

Given a set of training pairs (x_i, y_i), i = 1, ..., n, of sequences and segmentations, we can learn a vector of parameters w that assigns a high score to segmentation y_i and a low score to all other possible segmentations of x_i. For this we will use the structural SVM formulation with margin rescaling [7], which solves the following problem:

  min_{w, xi}  (1/2)||w||^2 + (C/n) sum_{i=1}^{n} xi_i
  s.t.  for all i and all y_hat in Y_i :  w^T Phi(x_i, y_i) - w^T Phi(x_i, y_hat) >= Delta(y_i, y_hat) - xi_i

The constraints of this optimization problem enforce that the difference in score between the correct segmentation y_i and any incorrect segmentation y_hat is at least as large as the loss Delta(y_i, y_hat). If y_hat_i is the inferred segmentation, then the slack variable xi_i upper bounds Delta(y_i, y_hat_i). Hence, the objective is a tradeoff between a small upper bound on the average training loss and a low-complexity hypothesis w. The tradeoff is controlled by C, which is set using cross-validation.

Since the sets of valid segmentations Y_i are exponentially large, we solve the optimization problem with a cutting plane algorithm [7]. We start with an empty set of constraints and in each iteration we find the most violated constraint for each example. We add these constraints to our optimization problem and re-optimize. We do this until there are no constraints which are violated by more than a prespecified tolerance epsilon. This procedure will terminate after O(1/epsilon) iterations [8]. For a training pair (x_i, y_i) the most violated constraint is:

  y_hat = argmax_{y_hat in Y_i} w^T Phi(x_i, y_hat) + Delta(y_i, y_hat)    (2)

Apart from the addition of Delta(y_i, y_hat), this is the same as the inference problem. For the losses we introduced, we can solve the above problem with the same inference algorithm in a slightly modified trellis graph. More precisely, for every vertex v we can define a cost c(v) for visiting it (this can be absorbed into the costs of v's incoming edges) and find the longest path in this modified graph. This is possible because our losses decompose over the vertices of the graph. This is not true for losses such as 1 - F1, for which (2) seems to require time quadratic in the length of the sequence.

Table 1: Some statistics about the executable sections of the programs in the dataset

                                Maximum   Average
  Bytes                         49152     16712
  Blocks                        3502      887
  Block length (bytes)          2794      13
  Block length (instructions)   1009      4

For the block loss, the costs are defined as follows. If b_i is in y then c(d_i) = 1.
This encodes that using d_i instead of b_i misses the beginning of one block. If b_i is not in y, then b_i defines an incorrect code block which spans bytes i, ..., j, and

  c(b_i) = 1 + |{k : b_k in y and i < k <= j}|

capturing that we will introduce one incorrect block and we will skip all the blocks that begin between positions i and j. All other vertices in the graph have zero cost. In Figure 1, vertices d_1, d_8 and d_10 have a cost of 1, while b_2, b_4, b_6, b_7, b_9, and b_11 have costs 3, 1, 1, 3, 2, and 1, respectively.

For the instruction loss, y is a set of instruction positions. Similarly to the block loss, if i is in y then c(d_i) = 1. If i is not in y, then b_i is the beginning of an incorrect block that spans bytes i, ..., j and produces instructions in a set of positions y_hat_i. Let s be the first position in this block that gets synchronized with the correct decoding, i.e. s = min(y_hat_i intersect y), with s = j if the intersection is empty. Then

  c(b_i) = |{k : k in y_hat_i and i <= k < s}| + |{k : k in y and i < k < s}|

The first term captures the number of incorrect instructions produced by treating b_i as the start of a code block, while the second term captures the number of missed real instructions. All other vertices in the graph have zero cost. In Figure 1, vertices d_1, d_4, d_6, d_8, d_10 and d_11 have a cost of 1, while b_2, b_7, and b_9 have costs 5, 3, and 1, respectively. For the normalized losses, we simply divide the costs by |y|. A sketch of the inference routine, including these costs, follows this section.

4 Experiments

To evaluate our model we tried two different ways of collecting data, since we could not find a publicly available set of programs together with their segmentations. First, we tried using debugging information, i.e. compiling a program with and without debugging information and using the debug annotations to identify the code blocks. This approach could not discover all code blocks, especially when the compiler was automatically inserting code that did not exist in the source, such as the calls to destructors generated by C++ compilers. Therefore we resorted to treating the output of OllyDbg, a heuristic tool, as the ground truth. Since the executables we used were 200 common programs from a typical installation of Windows XP, we believe that the outputs of heuristic tools should have little noise. For a handful of programs we manually verified that another heuristic tool, IdaPro, mostly agreed with OllyDbg. Of course, our model is a general statistical model and, given an expressive feature map, it can learn any ground truth. In this view the experiments suggest the relative performance of the compared models. The dataset, and an implementation of our model, is available at http://www.cs.cornell.edu/~nk/svmwis. Table 1 shows some statistics of the dataset.

We use two kinds of features, byte-level and instruction-level features. For each edge in the graph, the byte-level features are extracted from an 11 byte window around the source of the edge (so if the source vertex is at position i, the window spans positions i - 5, ..., i + 5). The features are which bytes and byte pairs appear in which position inside the window. An example feature is "does byte c3 appear in position i - 1?". In x86 architectures, when the previous instruction is a return instruction this feature fires. Of course, it also fires in other cases, and that is why we need instruction-level features. These are obtained from histograms of instructions that occur in candidate code blocks (i.e. edges of the form (b_i, e_j)).
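The following is a minimal sketch of the linear-time inference of Section 3, in our own naming: `code_score(i, j)` stands for the learned score w^T phi(b_i, e_j, x) of a candidate code block and `data_score(i)` for the score of labeling byte i as data; neither name is from the paper. Adding the vertex costs c(.) defined above to these scores turns the same routine into the loss-augmented inference (2).

```python
# Dynamic program for the heaviest s-t path in the trellis graph. It runs in
# linear time because each position contributes at most one interval.
def infer(n, intervals, code_score, data_score):
    NEG = float("-inf")
    best = [NEG] * (n + 1)   # best[i]: best score over segmentations of bytes 0..i-1
    back = [None] * (n + 1)  # backpointers: ("d", i) data byte, ("c", i, j) code block
    best[0] = 0.0
    for i in range(n):
        if best[i] == NEG:
            continue
        # option 1: byte i is data (vertex d_i)
        s = best[i] + data_score(i)
        if s > best[i + 1]:
            best[i + 1], back[i + 1] = s, ("d", i)
        # option 2: a code block starts at byte i and spans i..j (edge b_i -> e_j)
        if i in intervals:
            j = intervals[i]
            s = best[i] + code_score(i, j)
            if s > best[j + 1]:
                best[j + 1], back[j + 1] = s, ("c", i, j)
    # recover the segmentation by following backpointers from position n
    segmentation, i = [], n
    while i > 0:
        step = back[i]
        segmentation.append(step)
        i = step[1]           # both step kinds record their start position
    return best[n], list(reversed(segmentation))
```

Because a byte can always be labeled as data, best[n] is always finite and a segmentation is always recovered.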
For the instruction-level features, we use two kinds of histograms: one where we abstract the values of the arguments of the instructions but keep their type (register, memory location or constant), and one where we completely discard all information about the arguments. An example of the former type of feature would be "number of times the instruction [add register, register] appears in this block". An example of the latter type of feature would be "number of times the instruction [mov] appears in this block". In total, we have 2.3 million features. Finally, we normalize the features by dividing them by the length of the sequence.

Table 2: Empirical results. Delta_H is Hamming loss. Normalized losses (Delta_NX) are multiplied by the average number of bytes, instructions, or blocks to bring all numbers to a similar scale.

                 Delta_H   Delta_NH (x bytes)   Delta_I   Delta_NI (x instr.)   Delta_B   Delta_NB (x blocks)
  Greedy         1623.6    1916.6               2164.3    7045.2                1564.9    4747.2
  SVMhmm         236.2     201.3                -         -                     45.1      46.9
  SVMwis Delta_I 98.8      115.6                44.6      98.0                  26.1      41.1
  SVMwis Delta_NI 104.3    103.7                45.5      79.7                  30.5      35.5
  SVMwis Delta_B 86.5      98.2                 39.6      80.2                  21.5      32.1
  SVMwis Delta_NB 85.2     87.2                 40.6      75.4                  23.4      29.8

We compare our model SVMwis (standing for weighted interval scheduling, to underscore that it is not a general segmentation model), trained to minimize the losses we introduced, with a very strong baseline, a discriminatively trained HMM (using SVMhmm). This model uses only the byte-level features, since it cannot express the instruction-level features. It tags each byte as being the beginning, inside or outside of a code block using Viterbi and optimizes Hamming loss. Running a general segmentation model [4] was impractical since inference depends quadratically on the maximum length of the code blocks, which was 2800 in our data. Finally, it would be interesting to compare with [5], but we could not find their inference algorithm available as ready-to-use software.

For all experiments we use five-fold cross-validation, where three folds are used for training, one fold for validation (selecting C) and one fold for testing. Table 2 shows the results of our comparison for different loss functions (columns): Hamming loss, instruction loss, block loss, and their normalized counterparts. Results for normalized losses have been multiplied by the average number of bytes, instructions, or blocks to bring all numbers to a similar scale. To highlight the strength of our main baseline, SVMhmm, we have included a very simple baseline which we call greedy. Greedy starts decoding from the beginning of the sequence and after decoding a block (b_i, e_j) it repeats at position j + 1. It only marks a byte as data if the decoding fails, in which case it starts decoding from the next byte in the sequence.

The results suggest that just treating our task as a simple sequence labeling problem at the level of bytes already goes a long way in terms of Hamming loss and block loss. SVMhmm sometimes predicts as the beginning of a code block a position that leads to a decoding error. Since it is not clear how to compute the instruction loss in this case, we do not report instruction losses for this model. The last four rows of the table show the results for our model, trained to minimize the loss indicated on each line. We observe a further reduction in loss for all of our models. To assess this reduction, we used paired Wilcoxon signed rank tests between the losses of SVMhmm's predictions and the losses of our model's predictions (200 pairs).
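For reference, the losses reported in Table 2 reduce to set operations on block or instruction start positions, as defined in Section 3.1. A minimal sketch, with the worked example of Figure 1 as a sanity check:

```python
# Symmetric-difference losses of Section 3.1. y and y_hat are sets of
# positions: block starts for Delta_B, instruction starts for Delta_I.
def symdiff_loss(y, y_hat):
    return len(y) + len(y_hat) - 2 * len(y & y_hat)

def normalized_loss(y, y_hat):
    return symdiff_loss(y, y_hat) / len(y)

# Worked example from Section 3.1 (the path s, d1, b2, e12, t in Figure 1):
assert symdiff_loss({1, 8, 10}, {2}) == 4                       # block loss
assert symdiff_loss({1, 4, 6, 8, 10, 11}, {2, 9, 10, 11}) == 6  # instruction loss
```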
For all four models, the Wilcoxon tests suggest a statistically significant improvement over SVMhmm at the 1% level. For the block loss and its normalized version Delta_NB, we see that the best performance is obtained for the model trained to minimize the respective loss. However, this is not true for the other loss functions. For the Hamming loss, this is expected, since the SVMwis models are more expressive and a small block loss or instruction loss implies a small Hamming loss, but not vice versa. For the instruction loss, we believe this occurs for two reasons. First, our data consists of benign programs, and for them learning to identify the code blocks may be enough. Second, it may be harder to learn with the instruction loss, since its value depends on how quickly each decoding synchronizes with another (the correct) decoding of the stream, something that is not modeled in the feature map we are using. The end result is that the models trained for block loss also attain the smallest losses for all other loss functions.

5 Related work and other applications

There are two lines of research which are relevant to this work: one is structured prediction approaches for segmenting sequences, and the other is research on static analysis techniques for finding code and data blocks in executables. Segmentation of sequences can be done via sequence labeling, e.g. [9]. If features of whole segments are needed, then more expressive models such as semi-CRFs [3] or semi-Markov SVMs [4] can be used. The latter work introduced training of segmentation models for specific losses. However, if the segments are allowed to be long enough, these models have polynomial but impractical inference complexity. With additional assumptions on the features, [5] gives an efficient, though somewhat complicated, inference algorithm. In our model, inference takes linear time, is simple to implement, and does not depend on the length of the segments.

Previous techniques for identifying code blocks in executables have used no or very little statistical learning. For example, [10] and [11] use recursive disassembly and pattern heuristics similarly to currently used tools such as OllyDbg and IdaPro. These heuristics make many assumptions about the data, which are lucidly explained in [6]. In that work, the authors use simple statistical models based on unigram and bigram instruction models in addition to the pattern heuristics. However, these approaches make independent decisions for every candidate code block and they have a less principled way of dealing with equally plausible but overlapping code blocks. Our work is most similar to [12], which uses a CRF to locate the entry points of functions. They use features that induce pairwise interactions between all possible positions in the executable, which makes their formulation intractable. They perform approximate inference with a custom iterative algorithm, but this is still slow. Our model can capture all the types of features that were used in that model except one. This feature encodes whether an address that is called by a function is not marked as a function, and including it in our structure would make exact inference NP-hard. One way to approximate this feature would be to count how many candidate code blocks have instructions that jump to or call the current position in the sequence. For their task, compiling with debugging information was enough to get real labels, and they showed that, according to these labels, heuristic tools are outperformed by their learning approach.
Finally, we conclude this section with a discussion on the broader impact of this work. Our model is a general structured learning model and can be used in many sequence labeling problems. First, it can encode all the features of a linear chain CRF, and can simulate it by specifying a structure where each block is required to end at the same position where it starts. Furthermore, it can be used for any application where each position can yield at most one, or a small number of, arbitrarily long possible intervals and still have linear time inference, while inference in segmentation models depends on the length of the segments. Applications of this form can arise in any kind of scheduling problem where we want to learn a scheduler from example schedules. For example, a news website may decide to show an ad on their front page together with their news stories. Each advertiser submits an ad along with the times at which they want the ad to be shown. The news website can train a model like the one we proposed based on past schedules and the observed total profit for each of those days. The profit may not be directly observable for each individual ad, depending on who serves the ads. When one or more ads change in the future, the model can still create a good schedule because its decisions depend on the features of the ads (such as the words in each ad), the time selected for displaying the ad, as well as the surrounding ads.

6 Conclusions

In this work we proposed a code segmentation model, SVMwis, that can help security experts in the static analysis of binary executables. We showed that inference in this model is as fast as for sequence labeling, even though our model can have features that are computed from entire blocks of code. Moreover, our model is trained for the loss functions that are appropriate for the task. We also compared our model with a very strong baseline, a sequence labeling approach using a discriminatively trained HMM, and showed that we consistently outperform it. In the future we would like to use data annotated with real segmentations, which it might be possible to extract via a closer look at the compilation and linking process. We also want to look into richer features, such as some approximation of call consistency (since the actual constraints give rise to NP-hard inference), so that addresses which are targets of call or jump instructions from a code block do not lie inside data blocks. Finally, we plan to extend our model to allow for joint segmentation and classification of the executable as malicious or not.

Acknowledgments

I would like to thank Adam Siepel for bringing segmentation models to my attention and Thorsten Joachims, Dexter Kozen, Ainur Yessenalina, Chun-Nam Yu, and Yisong Yue for helpful discussions.

References

[1] F. B. Wrixon. Codes, Ciphers, Secrets and Cryptic Communication, page 490. Black Dog & Leventhal Publishers, 2005.
[2] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[3] S. Sarawagi and W. W. Cohen. Semi-Markov conditional random fields for information extraction. Advances in Neural Information Processing Systems, 17:1185-1192, 2005.
[4] Q. Shi, Y. Altun, A. Smola, and S. V. N. Vishwanathan. Semi-Markov models for sequence segmentation. In Proceedings of the 2007 EMNLP-CoNLL.
[5] S. Sarawagi. Efficient inference on sequence segmentation models. In Proceedings of the 23rd International Conference on Machine Learning, page 800. ACM, 2006.
[6] C. Kruegel, W. Robertson, F. Valeur, and G. Vigna. Static disassembly of obfuscated binaries. In Proceedings of the 13th USENIX Security Symposium, page 18. USENIX Association, 2004.
[7] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(2):1453, 2006.
[8] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27, 2009.
[9] F. Sha and F. Pereira. Shallow parsing with conditional random fields. In Proceedings of HLT-NAACL, pages 213-220, 2003.
[10] H. Theiling. Extracting safe and precise control flow from binaries. In Seventh International Conference on Real-Time Computing Systems and Applications, pages 23-30, 2000.
[11] C. Cifuentes and M. Van Emmerik. UQBT: Adaptable binary translation at low cost. Computer, 33(3):60-66, 2000.
[12] N. Rosenblum, X. Zhu, B. Miller, and K. Hunt. Learning to analyze binary computer code. In Conference on Artificial Intelligence (AAAI 2008), Chicago, Illinois, 2008.
Convex Multiple-Instance Learning by Estimating Likelihood Ratio

Fuxin Li and Cristian Sminchisescu
Institute for Numerical Simulation, University of Bonn
{fuxin.li,cristian.sminchisescu}@ins.uni-bonn.de

Abstract

We propose an approach to multiple-instance learning that reformulates the problem as a convex optimization on the likelihood ratio between the positive and the negative class for each training instance. This is cast as joint estimation of both a likelihood ratio predictor and the target (likelihood ratio variable) for instances. Theoretically, we prove a quantitative relationship between the risk estimated under the 0-1 classification loss and under a loss function for likelihood ratio. It is shown that likelihood ratio estimation is generally a good surrogate for the 0-1 loss, and separates positive and negative instances well. The likelihood ratio estimates provide a ranking of instances within a bag and are used as input features to learn a linear classifier on bags of instances. Instance-level classification is achieved from the bag-level predictions and the individual likelihood ratios. Experiments on synthetic and real datasets demonstrate the competitiveness of the approach.

1 Introduction

Multiple Instance Learning (MIL) was proposed over 10 years ago as a methodology to learn models under weak labeling constraints [1]. Unlike traditional binary classification problems, the positive items are represented as bags, which are sets of instances. A feature vector is used to represent each instance in the bag. There is an OR relationship in a bag: if one of the feature vectors is classified as positive, the entire bag is considered positive. A simple intuition is: one has a number of keys and faces a locked door. To enter the door, we only need one matching key.

MIL is a natural weak labeling formulation for text categorization [2] and computer vision problems [3]. In document classification, one is given files made of many sentences, and often only a few are useful. In computer vision, an image can be decomposed into different regions, and only some delineate objects. Therefore, MIL can be used in sophisticated tasks, such as identifying the location of object parts from bounding box information in images [4]. Although efforts have been made to provide datasets with increasingly more detailed supervisory information [5], without automation such a minutiae level of detail becomes prohibitive for large datasets or more complicated data like video [6, 7]. In this case, one necessarily needs to resort to multiple-instance learning.

MIL is interesting mainly because of its potential to provide instance-level labels from weak supervisory information. However, the state of the art in MIL is often obtained by simply using a weighted sum of kernel values between all instance pairs within the bags, while ignoring the prediction of instance labels [8, 9, 10]. It is intriguing why MIL algorithms that exploit instance-level information cannot achieve better performance, as constraints at the instance level seem abundant: none of the negative instances is positive. This should provide additional constraints in defining the region of positive instances and should help classification in input space.

A major challenge is the non-convexity of many instance-level MIL algorithms [2, 11, 12, 13, 14]. Most of these algorithms perform alternating minimization on the classifier and the instance weights. This procedure usually gives only a local optimum, since the objective is non-convex.
The benchmark performance of MIL methods is overall quite similar, although the techniques differ significantly: some assign binary weights to instances [2], some assign real weights [12, 13], yet others use probabilistic formulations [14]; some optimize using conventional alternating minimization, others use convex-concave procedures [11]. Gehler and Chapelle [15] have recently performed an interesting analysis of the MIL costs, where deterministic annealing (DA) was used to compute better local optima for several formulations. In the case of a previous mi-SVM formulation [2], annealing methods did not improve the performance significantly. A newly proposed algorithm, ALP-SVM, was also introduced, which uses a preset parameter defining the fixed ratio of witnesses, i.e. the true positive instances in a positive bag. Excellent results were obtained with this witness rate parameter set to the correct value. However, in practice it is unclear whether this can be known beforehand and whether it is stationary across different bags. In principle, the witness rate should also be estimated, and this learning stage partially accounts for the non-convexity of the MIL problem. It remains however unclear whether the observed performance variations are caused by non-convexity or by other modeling aspects.

Although performance considerations have hindered the application of MIL to practical problems, the methodology has started to gain momentum recently [4, 16]. The success of the Latent SVM for person detection [4] shows that a standard MIL procedure (the reformulation of the alternating minimization MI-SVM algorithm in [2]) can achieve good results if properly initialized. However, proper initialization of MIL remains elusive in general, as it often requires engineering experience with the individual problem structure. Therefore, it is still of broad interest to develop an initialization-independent formulation for MIL. Recently, Li et al. [17] proposed a convex instance-level MIL algorithm based on multiple kernel learning, where one kernel is used for each possible combination of instances. This creates an exponential number of constraints and requires a cutting-plane solver. Although the formulation is convex, its scalability drops significantly for bags with many instances.

In this paper we make an alternative attempt towards a convex formulation: we establish that the non-convex MIL constraints can be recast reliably into convex constraints on the likelihood ratio between the positive and negative classes for each instance. We transform the multiple-instance learning problem into a convex joint estimation of the likelihood ratio function and the likelihood ratio values on training instances. The choice of jointly convex loss functions is rich, remarkably at least from a family of f-divergences. Theoretically, we prove consistency results for likelihood ratio estimation, thus showing that f-divergence loss functions upper bound the classification 0-1 loss tightly, unless the likelihood is very large. A support vector regression scheme is implemented to estimate the likelihood ratio, and it is shown to separate positive and negative instances well. However, determining the correct threshold for instance classification from the training set remains non-trivial. To address this problem, we propose a post-processing step based on a bag classifier computed as a linear combination of likelihood ratios.
While this is shown to be suboptimal in synthetic experiments, it still achieves state-of-the-art results on practical datasets, demonstrating the vast potential of the proposed approach.

2 Convex Reformulation of the Multiple Instance Constraint

Let us consider a learning problem with n training instances in total, n+ positive and n- negative. In negative bags, every instance is negative, hence we do not separately define such bags; instead we directly work with the instances. Let B = {B_1, B_2, ..., B_k} be the positive bags and X+ = {x+_1, x+_2, ..., x+_{n+}}, X- = {x-_1, x-_2, ..., x-_{n-}} be the training input, where each x+_i belongs to a positive bag B_j and each x-_i is a negative instance. The goal of multiple instance learning is, given {X+, X-, B}, to learn a decision rule, sign(f(x)), to predict the label {+1, -1} for a test instance x.

The MIL problem can be characterized by two properties: 1) negative-exclusion: if none of the instances in a bag is positive, the bag is not positive; 2) positive-identifiability: if one of the instances in the bag is positive, the bag is positive. These properties are equivalent to a constraint max_{x_i in B_j} f(x_i) >= 0 on positive bags. This constraint is not convex, since the negative max function is concave. Reformulation into a sum constraint such as sum f(x) >= 0 would be convex when f(x) = w^T x is linear [6]. However, this hardly retains positive-identifiability, since if there is only one x_i with f(x_i) > 0, it can be superseded by other instances with f(x_i) < 0. Apparently, the distinction between the sum and the max operations is significant and difficult to ignore in this context. However, in this paper we show that if the MIL conditions are formulated as constraints on the likelihood ratio, convexity can be achieved. For example, the constraint

  sum_{x_i in B_j} Pr(y = 1|x_i) / Pr(y = -1|x_i) > |B_j|    (1)

can ensure both of the MIL properties. Positive-identifiability is satisfied when Pr(y = 1|x_i) >= |B_i|/(|B_i| + 1), or equivalently, when the positive examples all have a very large margin. When the size of the bag is large, the assumption Pr(y = 1|x_i) > |B_j|/(|B_j| + 1) can be too strong. Therefore, we exploit large deviation bounds to reduce the quantity |B_j|, such that Pr(y = 1|x_i) does not have to be very large to satisfy the constraint. Intuitively, if the examples are not very ambiguous, i.e. Pr(y = 1|x_i) is not close to 1/2, then likelihood ratio sums on negative examples can become much smaller, hence we can adopt a significantly lower threshold at some degree of violation of the negative-exclusion property. To this end, a common assumption is low label noise [18, 19]:

  M_alpha: there exists c > 0 such that, for all epsilon, Pr(0 < |Pr(y = 1|x_i) - 1/2| <= epsilon) <= c * epsilon^alpha.

This assumes that the posterior Pr(y = 1|x_i) is usually not very close to 1/2, meaning that most examples are not very ambiguous, which is usually reasonable. In [18, 19, 20], a number of results have been obtained implying that classifiers learned under this assumption converge to the Bayes error much faster than the conventional empirical process rate O(n^{-1/2}) of most standard classifiers, and can be as fast as O(n^{-1}). These theoretical results show that low label noise assumptions indeed support learning with fewer observations. Assuming M_alpha holds, we prove the following result, which allows us to relax the hard constraint (1):

Theorem 1. For all delta > 0 and each x_i in a bag B_j, assume y_i is drawn i.i.d. from a distribution Pr_{B_j}(y_i|x_i) that satisfies M_alpha.
If all instances x_i in B_j are negative, then the probability that

  sum_{x_i in B_j} Pr(y = 1|x_i) / Pr(y = -1|x_i)  >=  (4*alpha + 1) / (2^alpha (alpha + 1)(alpha + 2)) * |B_j|  +  sqrt( (alpha + 4) / (2^alpha (alpha + 1)(2*alpha + 3)) * |B_j| * log(1/delta) )  +  (1/3) * log(1/delta)    (2)

is at most delta.

The proof is given in an accompanying technical report [21]. From Theorem 1, we can weaken the constraint (1) to obtain constraint (2) and still ensure negative-exclusion with probability 1 - delta. When alpha is large, the reduction is significant. For example, for alpha = 2 and delta = 0.05, the right-hand side of (2) is approximately (1/4)|B_i| + sqrt((3/14)|B_i|) + 1, an important decrease over |B_i| whenever |B_i| >= 3. Note that the i.i.d. assumption in Theorem 1 applies to each bag. Different bags can have different label distributions. This is often a significantly weaker assumption than ones based on globally i.i.d. labels [8].

3 Likelihood Ratio Estimation

To estimate the likelihood ratio, one possibility would be to use kernel methods as nonparametric estimators over an RKHS. This approach was taken in [22], where predictions of the ratio provided a variational estimate of an f-divergence (or Ali-Silvey divergence) between two distributions. The formulation is powerful, yet not immediately applicable here. In our case, because of the uncertainty in the positive examples, Pr(y = 1|x) is not observed but has to be estimated. Therefore we need to optimize jointly, as min_{f, Pr(y=1|x)} D(f, Pr(y = 1|x)) + lambda ||f||^2, with loss function D(f, g). This optimization would not be convex if the framework in [22] were taken.
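For reference, the relaxed bag threshold of Theorem 1 is cheap to evaluate. The sketch below uses constants read off our reconstruction of equation (2), which was damaged in this copy; they should be checked against the technical report [21] before being relied on.

```python
import math

# Relaxed bag threshold from Theorem 1 (constants as reconstructed from (2);
# verify against [21]). Later, in problem (4), the paper falls back to
# D_i = |B_i| for bags with fewer than 3 instances.
def bag_threshold(size, alpha=2.0, delta=0.05):
    if size < 3:
        return float(size)
    log_term = math.log(1.0 / delta)
    linear = (4 * alpha + 1) / (2 ** alpha * (alpha + 1) * (alpha + 2)) * size
    deviation = math.sqrt((alpha + 4) / (2 ** alpha * (alpha + 1) * (2 * alpha + 3))
                          * size * log_term)
    return linear + deviation + log_term / 3.0

# For alpha = 2, delta = 0.05 this is roughly |B|/4 + sqrt(3|B|/14) + 1,
# well below the hard threshold |B| once |B| >= 3.
```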
(?), where 1+? ?? ? ?(?) = H ? ( 1+? is the Fenchel-Legendre biconjugate of g, which 2 ) ? H( 2 ), ? ? [?1, 1], and g is essentially the largest convex lower bound of g [20]. The difference between likelihood ratio estimation and the classification setting is in the asymmetric scaling of the loss function for positive and negative examples. Let ?? = ?(?x), R? (f ) = ? ? Pr(y = ?1, f (x) > 1), R? = inf f R? (f ), R+ (f ) = Pr(y = 1, f (x) < 1) and R+ = inf f R+ (f ) be the risk and Bayes risks on negative and positive examples, respectively. It is easy to prove that ? ? R(f ) ? R? = R? (f ) ? R? + R+ (f ) ? R+ . We derived the following theorem: Theorem 2 a) For any nonnegative loss function C(?, ?), any measurable f : X ? R, and any ? ? ? probability distribution on X ?{?1}, ?? (R? (f )?R? )+?(R+ (f )?R+ ) ? RC (f )?RC . b) The following conditions are equivalent: (1) C is classification-calibrated; (2) For any sequence (?i ) in [0, 1], ?(?i ) ? 0 if and only if ?i ? 0; (3) For every sequence of measurable functions fi : X ? R ? and every probability distribution on X ? {?1}, RC (fi ) ? RC implies R(fi ) ? R? . The proof is given in an accompanying technical report [21]. This suggests that if ? is well-behaved, minimizing RC (f ) still gives a reasonable surrogate for the classification risk. Compared against ? Theorem 3 in [20] which has the form ?(R(f ) ? R? ) ? RC (f ) ? RC , the difference here stems from the different loss transforms used for the positive and the negative examples. ? We consider an f -divergence of the likelihood as the loss function, i.e., C(?, ?) = D(?, 1?? ), ? where 1?? is the likelihood ratio when the Pr(y = 1|x) = ?. From convexity arguments, it can be ? easily seen that H(?) = C( ? , ?) = 0 and H ? (?) = D(1, ? ), therefore ?(?) = D(1, 1+? ). 1?? 1?? 1?? The ? for all the loss functions listed in (4) can be computed accordingly. In fig. 3 (a) we show the ? ?(?) of L1 and the KL-divergence from (4) and compare it against the hinge loss (where ?(?) = |?| [20]) used for SVM classification. It could be seen that our approximation of the classification loss is accurate when Pr(yi = 1|xi ) is small. However, likelihood estimation would severely penalize the misclassified positive examples with large Pr(yi = 1|xi ). This suggests that for the joint estimation of f and Pr(yi = 1|xi ), the optimizer would tend to make Pr(yi = 1|xi ) smaller, in order to avoid high penalties, as shown in fig. 1(b). In fig. 1(a) we plot ? functions for different losses. We prefer an L1 measure as it is closer to the classification hinge loss, at least for the negative examples. In the end we solve the nonparametric function estimation in RKHS using an epsilon-insensitive L1 loss, which can be reformulated as 4 1.2 0.8 ?(?) 0.6 0.4 L1 divergence ?5 10 ?1 0.2 KL?divergence Hinge Loss ?0.5 0 ? 0.5 0 1 Positive Negative Undecided 1 Estimated Likelihood 0 10 1.2 Positive Negative 1 0.8 0.6 0.4 0.2 0 ?0.2 0 50 (a) 100 150 200 ?0.2 0 50 (b) 100 Example 150 (c) Figure 1: Loss functions and their influence on the estimation bias. (a) The function ? appearing in the losses used for likelihood estimation (L1 , KL-divergence) is similar to the hinge loss when ? > 0; however it goes to infinity as ? approaches 1. This deviation essentially means the surrogate loss is going to be extremely large if an example with very large Pr(yi = 1|xi ) is misclassified. (b) Example estimated likelihood for a synthetic example. The estimated likelihood is biased towards smaller values. 
However, with a fully labeled training set, the threshold can still be obtained. (c) If we only know the label of the negative examples (blue) and the maximal positive example (red), determining the optimal threshold becomes non-trivial.]

support vector regression on the conditional likelihood, with the additional MIL constraints in (2):

  min_{f, y+}  sum_{x+_j} max(|f(x+_j) - y+_j| - epsilon, 0)  +  sum_{x-_j} max(|f(x-_j)| - epsilon, 0)  +  lambda ||f||^2
  s.t.  sum_{x+_j in B_i} y+_j >= D_i,   y+_j >= 0    (4)

where ||f||^2 is the RKHS norm; D_i is a constant for each bag and can be determined from Theorem 1, with appropriately chosen values for the constants alpha and delta; y+_i is an estimate of Pr(y = 1|x+_i)/Pr(y = -1|x+_i) for the training set. In this paper we use alpha = 2 and delta = 0.05, which gives the estimate of the bound for each bag as D_i = (1/4)|B_i| + sqrt((3/14)|B_i|) + 1 when |B_i| >= 3, and D_i = |B_i| when |B_i| < 3.

It can be proved that optimization problem (4) is jointly convex in both arguments. A standard representer theorem [24] would convert it to an optimization on vectors, which we omit here. The problem can be solved by different methods. The one easiest to implement is alternating minimization between solving for the SVM and projecting onto the constraint sets given by sum_{x+_j in B_i} y+_j >= D_i and y+_j >= 0. As this can turn out to be slow for large datasets, approaches such as the dual SMO or primal subgradient projection algorithms (in the case of a linear SVM) can be used. In this paper we implement the alternating minimization approach, which is provably convergent since the optimization problem (4) is convex. In the accompanying technical report [21] we derive an SMO algorithm based on the dual of (4) and characterize the basic properties of the optimization problem.

4 Bag and Instance Classification

If the likelihood ratio were obtained using an unbiased estimator, a decision rule based on sign(f(x) - 1) would give the optimal classifier. However, as previously argued, the joint estimation of f and y+ introduces a bias which is not always easy to identify. In positive bags, it is unclear whether an instance should be labeled positive or negative, as long as it does not contribute significantly to the classification error of its bag (fig. 1(b), (c)). In the synthetic experiments, we noticed that knowledge of the correct threshold would make the algorithm outperform competitors by a large margin (fig. 2). This means that, based on the learned likelihood ratio, the positive examples are usually well separated from the negative ones. Developing a theory that would advance these aspects remains a promising avenue for future work. The main difficulty stems from the compound source of bias, which arises from both the estimation of y+ and the loss minimization over y+ and f.

Here we propose a partial solution. Instead of directly estimating the threshold, we learn a linear combination of instance likelihood ratios to classify the bag. First, we sort the instance likelihood ratios for each bag into a vector of length max_i |B_i|. We append 0 to bags that do not have enough instances.
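Before moving on, here is a simplified sketch of the alternating scheme for problem (4) described above. It is our own rendering, not the paper's implementation: the target update uses a Euclidean projection onto {y+ >= 0, sum y+ >= D_i}, a simplification of the exact epsilon-insensitive subproblem, and all names are ours.

```python
import numpy as np
from sklearn.svm import SVR

# Euclidean projection of a bag's targets onto {t >= 0, sum(t) >= D}.
def project_bag(t, D):
    p = np.maximum(t, 0.0)
    if p.sum() >= D:
        return p
    # water-filling: find mu > 0 with sum(max(t + mu, 0)) = D
    s = np.sort(t)[::-1]
    cum = np.cumsum(s)
    for k in range(1, len(s) + 1):
        mu = (D - cum[k - 1]) / k
        if s[k - 1] + mu > 0 and (k == len(s) or s[k] + mu <= 0):
            return np.maximum(t + mu, 0.0)
    return np.full_like(t, D / len(t))   # unreachable safeguard

# Alternate between fitting an epsilon-insensitive SVR (negatives have
# target 0) and projecting each positive bag's targets onto its constraint.
def fit_likelihood_ratio(X_pos, bags, X_neg, D, iters=20, C=5.0, eps=0.1):
    targets = np.ones(len(X_pos))
    X = np.vstack([X_pos, X_neg])
    for _ in range(iters):
        y = np.concatenate([targets, np.zeros(len(X_neg))])
        f = SVR(kernel="rbf", C=C, epsilon=eps).fit(X, y)
        pred = f.predict(X_pos)
        for b, idx in enumerate(bags):   # bags: list of index arrays into X_pos
            targets[idx] = project_bag(pred[idx], D[b])
    return f
```

Since both steps decrease a jointly convex objective, the loop is a stationary-point-free descent scheme in spirit, though the exact convergence argument of the paper applies to its own subproblem, not to this simplified projection.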
4 Bag and Instance Classification

If the likelihood ratio were obtained using an unbiased estimator, a decision rule based on sign(f(x) − 1) would give the optimal classifier. However, as previously argued, the joint estimation of f and γ⁺ introduces a bias which is not always easy to identify. In positive bags, it is unclear whether an instance should be labeled positive or negative, as long as it does not contribute significantly to the classification error of its bag (fig. 1(b),(c)). In the synthetic experiments, we noticed that knowledge of the correct threshold would make the algorithm outperform competitors by a large margin (fig. 2). This means that, based on the learned likelihood ratio, the positive examples are usually well separated from the negative ones. Developing a theory that would advance these aspects remains a promising avenue for future work. The main difficulty stems from the compound source of bias, which arises from both the estimation of γ⁺ and the loss minimization over γ⁺ and f.

Here we propose a partial solution. Instead of directly estimating the threshold, we learn a linear combination of instance likelihood ratios to classify the bag. First, we sort the instance likelihood ratios of each bag into a vector of length max_i |B_i|, appending 0s to bags that do not have enough instances.

[Figure 2 here; plot data omitted.] Figure 2: Synthetic dataset (best viewed in color). (a) The true decision boundary. (b) Training points at 40% witness rate. (c) The learned regression function. (d) Bag misclassification rate of different algorithms. (e) Instance misclassification rate of different algorithms. (f) Estimated witness rate and true witness rate.

Under this representation, bag classification turns into a standard binary decision problem where a vector and a binary label are given for each bag, and a linear SVM is learned to solve the problem. If we were to classify only the likelihood ratio of the first instance, this procedure would reduce to simple thresholding. We instead leverage information in the entire bag, aiming to constrain the classifier to learn the correct threshold. In this linear SVM setting, regularization never helps in practice and we always fix C to very large values. Effectively no parameter tuning is needed.¹

To classify instances, a threshold is still necessary. In the current system, we follow a simple approach and take the mean between two instances: the one with the highest likelihood among training bags that are predicted negative by the bag classifier, and the lowest-scored one among instances in positive bags with a score higher than the previous one. This approach is derived from the basic MIL assumption that all instances in a negative bag are negative. Based on instance classification we can also estimate the witness rate of the dataset. This is computed as the ratio of positively classified instances to the total number of instances in the positive bags of the training set. Since our algorithm automatically adjusts to different witness rates, this estimate offers quantitative insight as to whether MIL should be used at all. For instance, if the witness rate is 100%, it may be more effective to use a conventional learning approach.

5 Experiments

5.1 Synthetic Data

We start with an experiment on the synthetic dataset of [15], where the controlled setting helps in understanding the behavior of the proposed algorithm. This is a 2-D dataset with the actual decision boundary shown in fig. 2(a). The positive bags have a fraction of points sampled uniformly from the white region and the rest sampled uniformly from the black region. An example of a sample at 40% witness rate is shown in fig. 2(b). In this figure, the plotted instance labels are the ones of their bags; indeed, one can notice many positive (blue) instances in the negative (red) region.

¹ We have also experimented with a uniform threshold based on probabilistic estimates, as well as with predicting an instance-level threshold. While the former tends to underfit, the latter overfits. Our bag-level classifier targets an intermediate level of granularity and turns out to be the most robust in our experiments.
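Returning to the bag-vector construction and the two-instance threshold rule of Section 4, they can be sketched as follows (hypothetical helper code; the function names and data layout are illustrative assumptions, not part of the paper):

```python
import numpy as np

def bag_vectors(ratios_per_bag):
    # Sort each bag's instance likelihood ratios (descending) and pad
    # with zeros to the length of the largest bag, as described above.
    m = max(len(r) for r in ratios_per_bag)
    return np.array([sorted(r, reverse=True) + [0.0] * (m - len(r))
                     for r in ratios_per_bag])

def instance_threshold(pos_bag_ratios, neg_pred_bag_ratios):
    # Mean of (i) the highest ratio in training bags the bag classifier
    # predicts negative and (ii) the lowest ratio above it inside
    # positive bags -- the two-instance rule described above.
    lo = max(max(r) for r in neg_pred_bag_ratios)
    above = [x for r in pos_bag_ratios for x in r if x > lo]
    return (lo + min(above)) / 2.0 if above else lo

def witness_rate(pos_bag_ratios, thresh):
    # Fraction of instances in positive bags classified as positive.
    flat = [x for r in pos_bag_ratios for x in r]
    return sum(x > thresh for x in flat) / float(len(flat))
```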
Table 1: Performance of various MIL algorithms on weak labeling benchmarks. The best result on each dataset is shown in bold. The second group of algorithms either does not provide instance labels (MI-Kernel and miGraph) or requires a parameter that can be difficult to tune (ALP-SVM). SVR-SVM appears to give consistent results among algorithms that provide instance labels. The row denoted "Est. WR" gives the estimated witness rates of our method.

Algorithm    | Musk-1     | Musk-2     | Elephant   | Tiger      | Fox
CH-FD        | 88.8       | 85.7       | 82.4       | 82.2       | 60.4
EMDD         | 84.9       | 84.8       | 78.3       | 72.1       | 56.1
mi-SVM       | 87.4       | 83.6       | 82.2       | 78.4       | 58.2
MI-SVM       | 77.9       | 84.3       | 81.4       | 84.0       | 57.8
MICA         | 84.4       | 90.5       | 82.5       | 82.0       | 62.0
AW-SVM       | 85.7       | 83.8       | 82.0       | 83.0       | 63.5
Ins-KI-SVM   | 84.0       | 84.4       | 83.5       | 82.9       | 63.4
MI-Kernel    | 88.0 ± 3.1 | 89.3 ± 1.5 | 84.3 ± 1.6 | 84.2 ± 1.9 | 60.3 ± 1.0
miGraph      | 88.9 ± 3.3 | 90.3 ± 2.6 | 86.8 ± 0.7 | 86.0 ± 2.8 | 61.6 ± 1.6
ALP-SVM      | 86.3       | 86.2       | 83.5       | 86.0       | 66.0
SVR-SVM      | 87.9 ± 1.7 | 85.4 ± 1.8 | 85.3 ± 2.8 | 79.8 ± 3.4 | 63.0 ± 3.5
Est. WR      | 100%       | 89.5%      | 37.8%      | 42.7%      | 100%

In order to test the effect of witness rates, 10 different types of datasets are created by varying the rates over the range 0.1, 0.2, . . . , 1. In this experiment we fix the hyperparameter C = 5 and use a Gaussian kernel with σ = 1. We show a trained likelihood ratio function in fig. 2(c), estimated on the dataset shown in fig. 2(b). Under the likelihood ratio, the positive examples are well separated from the negatives. This illustrates how our proposed approach converts multiple-instance learning into the problem of deciding a one-dimensional threshold. Complete results on datasets with different witness rates are shown in fig. 2(d) and (e). We give both bag classification and instance classification results. Our approach is referred to as SVR-SVM. BEST THRESHOLD refers to a method where the best threshold was chosen based on full knowledge of the training/test instance labels, i.e., the optimal performance our likelihood ratio estimator can achieve. Comparison is done with two other approaches, the mi-SVM of [2] and the AW-SVM from [15]. SVR-SVM generally works well when the witness rate is not very low. From instance classification, one can see that the original mi-SVM is only competitive when the witness rate is near 1; this situation is close to a supervised SVM. With the deterministic annealing approach of [15], AW-SVM and mi-SVM behave quite the opposite: competitive when the witness rate is small but degrading when it is large. Presumably this is because deterministic annealing is initialized with the a priori assumption that datasets are multiple-instance, i.e., have a small witness rate [15]. When the witness rate is large, annealing does not improve performance. On the contrary, the proposed SVR-SVM does not appear to be affected by the witness rate. With the same parameters used across all the experiments, the method self-adjusts to different witness rates. One can see the effect especially in fig. 2(e): regardless of the witness rate, the instance error rate remains roughly the same. However, this is still inferior to our model based on the best threshold, which indicates that important room for improvement exists.

5.2 MIL Datasets

The algorithm is evaluated on a number of popular MIL benchmarks. We use the common experimental setting based on 10-fold cross-validation for parameter selection, and we report the test results averaged over 10 trials. The results are shown in Table 1, together with other competitive methods from the literature [12, 15, 10] (for some of these methods standard deviation estimates are not available). In our tests, the proposed SVR-SVM gives consistently good results among algorithms that provide instance-level labels. The only atypical case is Tiger, where the algorithm underperforms other methods. Overall, the performance of SVR-SVM is slightly worse than miGraph and ALP-SVM.
But we note that the results for ALP-SVM are obtained by tuning the witness rate to its optimal value, which may be difficult in practical settings. The slightly lower performance compared to miGraph suggests that we may be inferior in the bag classification step, which we already know is suboptimal.

Table 2: Results on 20 Newsgroups. The best result on each dataset is shown in bold; pairwise t-tests are performed to determine whether the differences are statistically significant. miGraph is dominating in 10 datasets, whereas SVR-SVM is dominating in 14.

Dataset                 | MI-Kernel  | miGraph [10] | miGraph (web) | SVR-SVM    | Est. WR
alt.atheism             | 60.2 ± 3.9 | 65.5 ± 4.0   | 82.0 ± 0.8    | 83.5 ± 1.7 | 1.83%
comp.graphics           | 47.0 ± 3.3 | 77.8 ± 1.6   | 84.3 ± 0.4    | 85.2 ± 1.5 | 5.19%
comp.windows.misc       | 51.0 ± 5.2 | 63.1 ± 1.5   | 70.1 ± 0.3    | 66.9 ± 2.6 | 2.23%
comp.ibm.pc.hardware    | 46.9 ± 3.6 | 59.5 ± 2.7   | 79.4 ± 0.8    | 70.3 ± 2.8 | 2.42%
comp.sys.mac.hardware   | 44.5 ± 3.2 | 61.7 ± 4.8   | 81.0 ± 0      | 78.0 ± 1.7 | 4.58%
comp.window.x           | 50.8 ± 4.3 | 69.8 ± 2.1   | 79.4 ± 0.5    | 83.7 ± 2.0 | 5.36%
misc.forsale            | 51.8 ± 2.5 | 55.2 ± 2.7   | 71.0 ± 0      | 72.3 ± 1.2 | 4.29%
rec.autos               | 52.9 ± 3.3 | 72.0 ± 3.7   | 83.2 ± 0.6    | 78.1 ± 1.9 | 2.75%
rec.motorcycles         | 50.6 ± 3.5 | 64.0 ± 2.8   | 70.9 ± 2.7    | 75.6 ± 0.9 | 2.86%
rec.sport.baseball      | 51.7 ± 2.8 | 64.7 ± 3.1   | 75.0 ± 0.6    | 76.7 ± 1.4 | 4.31%
rec.sport.hockey        | 51.3 ± 3.4 | 85.0 ± 2.5   | 92.0 ± 0      | 89.3 ± 1.6 | 6.52%
sci.crypt               | 56.3 ± 3.6 | 69.6 ± 2.1   | 70.1 ± 0.8    | 69.7 ± 2.5 | 3.22%
sci.electronics         | 50.6 ± 2.0 | 87.1 ± 1.7   | 94.0 ± 0      | 91.5 ± 1.0 | 4.29%
sci.med                 | 50.6 ± 1.9 | 62.1 ± 3.9   | 72.1 ± 1.3    | 74.9 ± 1.9 | 5.23%
sci.space               | 54.7 ± 2.5 | 75.7 ± 3.4   | 79.4 ± 0.8    | 83.2 ± 2.0 | 3.64%
soc.religion.christian  | 49.2 ± 3.4 | 59.0 ± 4.7   | 75.4 ± 1.2    | 83.2 ± 2.7 | 3.30%
talk.politics.guns      | 47.7 ± 3.8 | 58.5 ± 6.0   | 72.3 ± 1.0    | 73.7 ± 2.6 | 3.23%
talk.politics.mideast   | 55.9 ± 2.8 | 73.6 ± 2.6   | 75.5 ± 1.0    | 80.5 ± 3.2 | 3.88%
talk.politics.misc      | 51.5 ± 3.7 | 70.4 ± 3.6   | 72.9 ± 2.4    | 72.6 ± 1.4 | 2.82%
talk.religion.misc      | 55.4 ± 4.3 | 63.3 ± 3.5   | 67.5 ± 1.0    | 71.9 ± 1.9 | 2.87%

5.3 Text Categorization

The text datasets are taken from [10]. These data have the benefit of being designed to have a small witness rate; thus they serve as a better MIL benchmark than the previous ones. They are derived from the 20 Newsgroups corpus, with 50 positive and 50 negative bags for each of the 20 news categories. Each positive bag has around a 3% witness rate. We run 10-fold cross-validation 10 times on each dataset and compute the average accuracy and standard deviations; C is fixed to 100 and ε to 0.2. The authors of [10] reported recent results for this dataset on their website which are vastly superior to the ones reported in the paper. Therefore, in Table 2 we include both results in the comparison, identified as miGraph [10] (the paper results) and miGraph (web), respectively. Our SVR-SVM performs significantly better than MI-Kernel and miGraph [10]. It is comparable with miGraph (web), and offers a marginal improvement. It is interesting that even though we use a suboptimal second step, SVR-SVM fares well against the state of the art. This shows the potential of methods based on likelihood ratio estimators for multiple instance learning.

6 Conclusion

We have proposed an approach to multiple-instance learning based on estimating the likelihood ratio between the positive and the negative classes on instances. The MIL constraint is reformulated into a convex constraint on the likelihood ratio, and a joint estimation of both the function and the target ratios on the training set is performed.
Theoretically, we justify that learning the likelihood ratio is Bayes-consistent and has desirable excess loss transform properties. Although we are not yet able to find the optimal classification threshold on the estimated ratio function, our proposed bag classifier based on such ratios obtains state-of-the-art results on a number of difficult datasets. In future work, we plan to explore transductive learning techniques in order to leverage the information in the learned ratio function, and to identify better threshold estimation procedures.

Acknowledgements

This work is supported, in part, by the European Commission, under a Marie Curie Excellence Grant MCEXT-025481.

References

[1] Dietterich, T.G., Lathrop, R.H., Lozano-Pérez, T.: Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence 89 (1997) 31-71
[2] Andrews, S., Tsochantaridis, I., Hofmann, T.: Support vector machines for multiple-instance learning. In: NIPS. (2003) 561-568
[3] Maron, O., Lozano-Pérez, T.: A framework for multiple-instance learning. In: NIPS. (1998) 570-576
[4] Felzenszwalb, P.F., McAllester, D.A., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: CVPR. (2008)
[5] Russell, B.C., Torralba, A., Murphy, K.P., Freeman, W.T.: LabelMe: A database and web-based tool for image annotation. IJCV 77(1-3) (2008) 157-173
[6] Cour, T., Sapp, B., Nagle, A., Taskar, B.: Talking pictures: Temporal grouping and dialog-supervised person recognition. In: CVPR. (2010)
[7] Zeisl, B., Leistner, C., Saffari, A., Bischof, H.: On-line semi-supervised multiple-instance boosting. In: CVPR. (2010)
[8] Gärtner, T., Flach, P.A., Kowalczyk, A., Smola, A.J.: Multi-instance kernels. In: ICML. (2002)
[9] Tao, Q., Scott, S., Vinodchandran, N.V., Osugi, T.T.: SVM-based generalized multiple-instance learning via approximate box counting. In: ICML. (2004)
[10] Zhou, Z.H., Sun, Y.Y., Li, Y.F.: Multi-instance learning by treating instances as non-i.i.d. samples. In: ICML. (2009)
[11] Cheung, P.M., Kwok, J.T.: A regularization framework for multiple-instance learning. In: ICML. (2006) 193-200
[12] Fung, G., Dundar, M., Krishnapuram, B., Rao, R.B.: Multiple instance learning for computer aided diagnosis. In: NIPS. (2007)
[13] Mangasarian, O., Wild, E.: Multiple instance classification via successive linear programming. Journal of Optimization Theory and Applications 137 (2008) 555-568
[14] Zhang, Q., Goldman, S.A., Yu, W., Fritts, J.E.: Content-based image retrieval using multiple-instance learning. In: ICML. (2002) 682-689
[15] Gehler, P., Chapelle, O.: Deterministic annealing for multiple-instance learning. In: AISTATS. (2007)
[16] Dollár, P., Babenko, B., Belongie, S., Perona, P., Tu, Z.: Multiple component learning for object detection. In: ECCV. (2008)
[17] Li, Y.F., Kwok, J.T., Tsang, I.W., Zhou, Z.H.: A convex method for locating regions of interest with multi-instance learning. In: ECML. (2009)
[18] Mammen, E., Tsybakov, A.B.: Smooth discrimination analysis. Annals of Statistics 27 (1999) 1808-1829
[19] Tsybakov, A.B.: Optimal aggregation of classifiers in statistical learning. Annals of Statistics 32 (2004) 135-166
[20] Bartlett, P., Jordan, M.I., McAuliffe, J.: Convexity, classification and risk bounds. Journal of the American Statistical Association 101 (2006) 138-156
[21] Li, F., Sminchisescu, C.: Convex multiple instance learning by estimating likelihood ratio.
Technical report, Institute for Numerical Simulation, University of Bonn (November 2010)
[22] Nguyen, X., Wainwright, M., Jordan, M.I.: Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In: NIPS. (2007)
[23] Liese, F., Vajda, I.: Convex Statistical Distances. Teubner VG (1987)
[24] Hofmann, T., Schölkopf, B., Smola, A.J.: Kernel methods in machine learning. The Annals of Statistics 36 (2008) 1171-1220
Distributionally Robust Markov Decision Processes

Huan Xu
ECE, University of Texas at Austin
[email protected]

Shie Mannor
Department of Electrical Engineering, Technion, Israel
[email protected]

Abstract

We consider Markov decision processes where the values of the parameters are uncertain. This uncertainty is described by a sequence of nested sets (that is, each set contains the previous one), each of which corresponds to a probabilistic guarantee for a different confidence level, so that a set of admissible probability distributions of the unknown parameters is specified. This formulation models the case where the decision maker is aware of and wants to exploit some (yet imprecise) a-priori information on the distribution of parameters, and it arises naturally in practice, where methods to estimate the confidence region of parameters abound. We propose a decision criterion based on distributional robustness: the optimal policy maximizes the expected total reward under the most adversarial probability distribution over realizations of the uncertain parameters that is admissible (i.e., that agrees with the a-priori information). We show that finding the optimal distributionally robust policy can be reduced to a standard robust MDP where the parameters belong to a single uncertainty set; hence it can be computed in polynomial time under mild technical conditions.

1 Introduction

Sequential decision making in stochastic dynamic environments, also called the "planning problem," is often modeled using a Markov Decision Process (MDP, cf. [1, 2, 3]). In practice, parameter uncertainty, i.e., the deviation of the model parameters from the true ones (reward r and transition probability p), often causes the performance of "optimal" policies to degrade significantly [4]. Many efforts have been made to reduce such performance variation under the robust MDP framework (e.g., [5, 6, 7, 8, 9, 10]). In this context, it is assumed that the parameters can be any member of a known set (termed the uncertainty set), and solutions are ranked based on their performance under the (respective) worst parameter realizations.

In this paper we extend the robust MDP framework to deal with probabilistic information on uncertain parameters. To motivate the problem, let us consider the following example. Suppose that an agent (car, plane, robot, etc.) wants to find the fastest path from a source location to a destination. If the passing time through area A is uncertain and can be very large, then the solution to the robust MDP would tend to take a detour and avoid A. However, if it is further known that the passing time can be large only when some unusual event occurs (whose chance is less than, say, 10%), such as a storm, and that otherwise the passing time is reasonable, then avoiding A may be overly pessimistic. The statement "the probability of the (uncertain) passing time being large is at most 10%" is important, and should be incorporated into the decision making paradigm. Indeed, it has been observed that since the robust MDP framework ignores probabilistic information, it can provide conservative solutions [11, 12].

A different approach to embedding prior information is to adopt a Bayesian perspective on the parameters of the problem; see [11] and references therein. However, a complete Bayesian prior on the model parameters may be difficult to conjure, as the decision maker may not have a reliable generative model of the uncertainty.
For example, in the path planning problem above, the decision maker may not know how to assign probabilities to the model dynamics when a storm occurs. Our approach offers a middle ground between the fully Bayesian approach and the robust approach: we want the decision maker to be able to use prior information, but we do not require a complete Bayesian interpretation.

We adapt the distributionally robust approach to MDPs under parameter uncertainty. The distributionally robust formulation has been extensively studied and broadly applied in single-stage optimization problems to effectively incorporate a-priori probabilistic information on the unknown parameters (e.g., [13, 14, 15, 16, 17, 18]). In this framework, the uncertain parameters are regarded as stochastic, with a distribution μ that is not precisely observed, yet assumed to belong to an a-priori known set C. The objective is then formulated based on a worst-case analysis over the distributions in C. That is, given a utility function u(x, ξ), where x ∈ X is the optimizing variable and ξ is the unknown parameter, distributionally robust optimization solves

max_{x∈X} inf_{μ∈C} E_{ξ∼μ}[u(x, ξ)].

Indeed, such an approach has also been developed in the mathematical finance community, usually in a static setup [19, 20]. Here the goal is to optimize a so-called coherent risk measure, which is shown to be equivalent to a distributionally robust formulation. From a decision-theory perspective, the distributionally robust approach coincides with the celebrated MaxMin Expected Utility framework [21, 22], which states that if a preference relationship among actions satisfies certain axioms, then the optimal action maximizes the minimal expected utility with respect to a class of distributions. This approach addresses the famous neglect-of-probability cognitive bias [23], i.e., the tendency to completely disregard probability when making a decision under uncertainty. Two extreme cases of such biases are the normalcy bias, which, roughly speaking, can be stated as "since a disaster has never occurred, it never will occur," and the zero-risk bias, the tendency of individuals to prefer small benefits that are certain to large ones that are uncertain, regardless of the size of the "certain" benefit and the expected magnitude of the uncertain one. It is easy to see that the nominal approach and the robust approach suffer from the normalcy bias and the zero-risk bias, respectively.

We formulate and solve the distributionally robust MDP with respect to nested uncertainty sets. The nesting structure implies that there are n different levels of estimation, that is, C_s^1 ⊆ C_s^2 ⊆ ··· ⊆ C_s^n, representing the possible parameters of the problem; the probability that the parameters of state s belong to C_s^i is at least λ_i. We also require the parameters to be state-wise independent (i.e., the uncertainty set is a product set over states). Policies are then ranked based on their expected performance under the (respective) most adversarial distribution. The main contribution of this paper is showing that for both the finite-horizon case and the discounted-reward infinite-horizon case, such an optimal policy satisfies a Bellman-type equation and can be computed via backward iteration.
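As a toy illustration of the single-stage max-min objective above (hypothetical code, not from the paper): with a finite decision set, a finite scenario grid, and a finite family of admissible distributions, the distributionally robust choice is a plain minimax computation.

```python
import numpy as np

def distributionally_robust_choice(utilities, admissible_dists):
    """utilities: array of shape (n_decisions, n_scenarios), u(x, xi).
    admissible_dists: array of shape (n_dists, n_scenarios), each row a
    distribution in the set C. Returns the decision maximizing the
    worst-case expected utility min_{mu in C} E_mu[u(x, xi)]."""
    expected = utilities @ admissible_dists.T      # (n_decisions, n_dists)
    worst_case = expected.min(axis=1)              # inner infimum over C
    return int(worst_case.argmax()), worst_case    # outer maximum over x

# Example: 2 decisions, 3 scenarios, 2 admissible distributions.
u = np.array([[4.0, 1.0, 0.0],
              [2.0, 2.0, 1.5]])
C = np.array([[0.8, 0.1, 0.1],
              [0.5, 0.3, 0.2]])
best, vals = distributionally_robust_choice(u, C)
print(best, vals)   # prints the maximin decision and its worst-case values
```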
Motivating example. The nested-set formulation is motivated by the "multi-scenario" setup, where in different scenarios the parameters are subject to different levels of uncertainty. For instance, in the path planning example, the uncertainty of the passing time through A can be modeled as a nested set with two uncertainty sets: with probability at least 90% the parameters belong to a small uncertainty set corresponding to "no storm," and they are guaranteed to belong to a large worst-case uncertainty set representing "storm" with probability at most 10%. In fact, the multi-layer formulation allows the decision maker to handle more than two scenarios. For example, a plane can encounter scenarios such as "normal," "storm," "big storm," and even "volcano ashes," each corresponding to a different level of parameter uncertainty. One appealing advantage of the nested-set formulation is that it does not require a precise description of the uncertainty, which leads to considerable flexibility. For example, if the uncertainty set of a robust MDP is not precisely known, then one can instead solve a distributionally robust MDP with a 2-set formulation where the inner and the outer sets represent, respectively, an "optimistic" and a "conservative" estimate. Additionally, the nested-set formulation also results from estimating the distributions of parameters via sampling. Such estimation is often imprecise, especially when only a small number of samples is available. Estimating uncertainty sets with high confidence, by contrast, can be made more accurate, and one can easily sharpen the approximation by incorporating more layers of confidence sets (i.e., increasing n).

2 Preliminaries and Problem Setup

A (finite) MDP is defined as a 6-tuple <T, γ, S, A_s, p, r> where: T is the possibly infinite decision horizon; γ ∈ (0, 1] is the discount factor; S is the finite state set; A_s is the finite action set of state s; p is the transition probability; and r is the expected reward.
Next we specify the set of admissible distributions of uncertain parameters CS investigated in this paper. Let 0 = ?0 ? ?1 ? ?2 ? ? ? ? ? ?n = 1, and Ps1 ? Ps2 ? ? ? ? ? Psn for s ? S. We use the following set of distributions CS for our model. O CS , {?|? = ?s ; ?s ? Cs , ?s ? S}, s?S (1) n i where: Cs , {?s |?s (Ps ) = 1; ?s (Ps ) ? ?i , i = 1, ? ? ? , n ? 1}. We briefly explain this set of distributions. For a state s, the condition ?s (Psn ) = 1 means that the unknown parameters (ps , rs ) are restricted to the outermost uncertainty set; and the condition ?s (Psi ) ? ?i means that with probability at least ?i , (ps , rs ) ? Psi . Thus, Ps1 , ? ? ? , Psn provides probabilistic guarantees of (ps , rs ) for n different uncertainty sets (or equivalently confidence levN els). Note that s?S ?s stands for the product measure generated by ?s , which indicates that the parameters among different states are independent. Throughout this paper we make a standard assumption (cf [5, 6, 8]) that Psi is nonempty, convex and compact. 3 Distributionally robust MDPs: The finite-horizon case. In this section we show how to solve distributionally robust policies to MDPs having finitely many decision stages. We assume that when a state is visited multiple times, each time it can take a different parameter realization (non-stationary model). Equivalently, this means that multiple visits to a state can be treated as visiting different states, which leads to the Assumption 1 without loss of generality (by adding dummy states). Thus, we can partition S according to the stage each state belongs to, and let St be the set of states belong to tth stage. The non-stationary model is proposed in [5] because the stationary model is generally intractable and a lower-bound on it is given by the non-stationary model. Assumption 1. (i) Each state belongs to only one stage; (ii) the terminal reward equals zero; and (iii) the first stage only contains one state sini . We next define sequentially robust policies through a backward induction as a policy that is robust in every step for a standard robust MDP. We will later shows that sequentially robust policies are also distributionally robust by choosing the uncertainty set of the robust MDP carefully. Definition 2. Let T < ? and let Ps be the uncertainty set of state s. Define the following: 3 1. For s ? ST , the sequentially robust value v?T (s) , 0. 2. For s ? St where t < T , the sequentially robust value v?t (s) and sequentially robust action ? ?s are defined as n o s v?t (s) , max min Ep [r(s, a) + ?? v (s)] . t+1 ?s ?s ??(s) ? ?s ? arg max ?s ??(s) (ps ,rs )?Ps n min (ps ,rs )?Ps o s Ep [r(s, a) + ?? v (s)] . t+1 ?s 3. A policy ? ? ? is a sequentially robust policy w.r.t. Ps if ?s ? S, ? ?s? is a sequentially robust action. A standard game theoretic argument implies that sequentially robust actions, and hence sequentially robust policies, exist. Indeed, from the literature in robust MDP (cf [5, 7, 8]) it is easy N to see that a sequentially robust policy is the solution to the robust MDP where the uncertainty set is s Ps . The following theorem, which is the main result of this paper, shows that any sequentially robust policy (w.r.t. a specific uncertainty set) ? ? is distributionally robust. Theorem 1.N Let T < ?. Let Assumption 1 hold, and suppose that ? ? is a sequentially robust policy w.r.t. s P?s , where P?s = { n X (?i ? ?i?1 )(rs (i), ps (i))|(ps (i), rs (i)) ? Psi }. i=1 Then 1. ? ? is a distributionally robust policy with respect to Cs ; and 2. 
there exists ?? ? Cs such that (? ? , ?? ) is a saddle point. That is, Z Z Z ? ? ? sup u(?, p, r) d? (p, r) = u(? , p, r) d? (p, r) = inf u(? ? , p, r) d?(p, r). ??CS ???HR Therefore, to find the sequentially robust policy, we need only to solve the sequentially robust action. Theorem 2. Denote ?0 = 0. For s ? St where t < T , the sequentially robust action is given by n nX  i ? o i ?? Vs q , (r ) q + (p ) q? = arg max (?i ? ?i?1 ) i min (2) s s i i q??(s) i=1 (ps ,rs )?Ps ? t+1 is the vector form of v?t+1 (s? ) for all s? ? St+1 , and where m = |As |, v ? ? ? t+1 e? v 1 (m) ?. : V?s , ? ? ? t+1 em (m) v Theorem 2 implies that the computation of the sequentially robust action at a state s critically depends on the structure of the sets Psi . In fact, it can be shown that for ?good? uncertainty sets, computing the sequentially robust action is tractable. This claim is made precise by the following corollary. We omit the proof that is standard. Corollary 1. The sequentially robust action for state s can be found in polynomial-time, if for each i = 1, ? ? ? , n, Psi has a polynomial separation oracle. Here, a polynomial separation oracle of a convex set H ? Rn is a subroutine that given x ? Rn , reports in polynomial time whether x ? H, and if the answer is negative, it finds a hyperplane that separates x and H. 3.1 Proof of Theorem 1 We prove Theorem 1 in this section. The outline of the proof is as follows: We first show that for a given policy, the expected performance under an admissible ? depends only on the expected value N of the parameters. Then we show that the set of expected parameters is indeed s?S P?s . Thus the 4 N distributionally robust MDP reduces to the robust MDP with s?S P?s being the uncertainty set. Finally, by applying results from robust MDPs we prove the theorem. Some of the intermediate results are stated with proof omitted due to space constraints. Let ht denote a history up to stage t and s(ht ) denote the last state of history ht . We use ?ht (a) to represent the probability of choosing an action a at state s(ht ), following a policy ? and under a history ht . A t + 1 stage history, with ht followed by action a and state s? is written as (ht , a, s? ). With an abuse of notation, we denote the expected reward-to-go under a history as: u(?, p, r, ht ) , Ep ?{ T X ? i?t r(si , ai )|(s1 , a1 ? ? ? , st ) = ht }. i=t HR and ? ? CS (?), define w(?, RFor ? ? ? R ?, ht ) , E(p,r)?? us (?, p, r, h(t)) = u(?, p, r, h(t))d?(p, r). Thus, w(?, ?, (sini )) = u(?, p, r) d?(p, r) is the minimax objective. OneN can show that the following recursion formula for w(?) holds, due to the fact that ?(p, r) = s?S ?s (ps , rs ). Lemma 1. Fix ? ? ?HR , ? ? CS and a history ht where t < T , denote r = E? (r), p = E? (p), then we have: Z  X  X   w(?, ?, ht ) = ?ht (a) r s(ht ), a + ?p s? |s(ht ), a w ?, ?, (ht , a, s? ) d?s(ht ) (ps(ht ) , rs(ht ) ) s? ?S a?As(ht ) = X a?As(ht )   X   ?ht (a) r s(ht ), a + ?p s? |s(ht ), a w ?, ?, (ht , a, s? ) . s? ?S From Lemma 1, by backward induction, one can show the following lemma holds, which essentially means that for any policy, the expected performance under an admissible distribution ? only depends on the expected value of the parameters under ?. Thus, the distributionally robust MDP reduces to a robust MDP. HR Lemma 2. Fix and ? ? CS , denote p = E? (p) and r = E? (r). We have:  ? ? ? ini w ?, ?, (s ) = u(?, p, r). Next we characterize the set of expected value of the parameters. Lemma 3. Fix s ? S, we have {E?s (ps , rs )|?s ? Cs } = P?s . N ? 
Note that Lemma 3 implies that {E? (p, r)|? ? CS } = s?S Ps . We complete the proof of the Theorem 1 using the equivalence of distributionally robust MDPs and robust MDPs where the unN certainty set is s?S P?s . Recall that for each s ? S, P?s is convex and compact. It is well known that for robust MDPs, a saddle point of the minimax objective exists (cf [5, 8]). More precisely, N there exists ? ? ? ?HR , (p? , r? ) ? s?S P?s such that sup u(?, p? , r? ) = u(? ? , p? , r? ) = inf N (r,p)? ???HR s?S ?s P u(? ? , p, r). N ? ? ? Moreover, ? ? and (p? , r? ) can be constructed state-wise: ? ? = s?S ?s and (p , r ) = N ? ? ? ? ? s?S (ps , rs ), and for each s ? St , ?s , (ps , rs ) solves the following zero-sum game  s max min Ep vt+1 (s) . ? r(s, a) + ?? ?s (ps ,rs )?P?s It follows that ?s? is any sequentially robust action, and hence ? ? can be any sequentially robust policy. From Lemma 3, there exists ??s ? Cs that satisfies E??s (ps , rs ) = (p?s , r?s ). Let ?? = N ? ? s?S s . By Lemma 2 we have  sup w ?, ?? , (sini ) = sup u(?, p? , r? ); ???HR ? ? ???HR ini   w ? , ? , (s ) = u ? ? , p? , r? ;  inf w ? ? , ?, (sini ) = inf u(? ? , p, r). N ??CS (p,r)? 5 s ?s P    This leads to sup???HR w ?, ?? , (sini ) = w ? ? , ?? , (sini ) = inf ??CS w ? ? , ?, (sini ) . Thus, part (ii) of Theorem 1 holds. Note that part (ii) immediately implies part (i) of Theorem 1. Remark: Lemma 1 holds for a broader class of distribution sets than we discussed here. Indeed, the only requirement of C for Lemma 1 to hold is the state-wise decomposibility. Therefore, the results presented in this paper may well extend to distributionally robust MDPs whose parameters belongs to other interesting sets of distributions, such as a set of parametric distribution (Gaussian, exponential, binomial etc) with the distribution parameter not precisely determined. 4 Distributionally robust MDP: The discounted reward infinite horizon case. In this section we show how to compute a distributionally robust policy for infinite horizon MDPs. Specifically, we generalize the notion of sequentially robust policies to discounted-reward infinitehorizon MDPs, and show that it is distributionally robust in an appropriate sense. N Definition 3. Let T = ? and ? < 1. Denote the uncertainty set by P? = s P?s . We define the following: 1. The sequentially robust value v?? (s) w.r.t. P?s is the unique solution to the following set of equations: n o s v?? (s) = max min Ep v? (s)] , ?s ? S. ?s [r(s, a) + ?? ?s ??(s) ?s (ps ,rs )?P 2. The sequentially robust action w.r.t. P?s , ? ?s , is given by o n s ? ?s ? arg max min Ep [r(s, a) + ?? v (s)] . ? ?s ?s ??(s) ?s (ps ,rs )?P 3. A policy ? ? ? is a sequentially robust policy w.r.t. P?s if ?s ? S, ? ?s? is a sequentially robust action. The sequentially robust policy is well defined, since the following operator L : R|S| ? R|S| is a ? contraction for k ? k? norm. X X X {Lv}(s) , max min [ q(a)r(s, a) + ? q(a)p(s? |s, a)v(s? )]. ?s q??(s) (p,r)?P a?As s? ?S a?As Furthermore, given any v, applying L is equivalent to solving a minimax problem, which by Theorem 2 can be efficiently computed. Hence, by applying L on any initial v0 ? R|S| repeatedly, the ? exponentially fast. resulting value vector will converge to the sequentially robust value v Note that in the infinite horizon case, we cannot model the system as (1) having finitely many states, and (2) each visited at most once. In contrast, we have to relax either one of these two assumptions, leading to two different natural formulations. 
The first formulation, termed non-stationary model, is to treat the system as having infinitely many states, each visited at most once. Therefore, we consider an equivalent MDP with an augmented state space, where each augmented state is defined by a pair (s, t) where s ? S and t, meaning state s in the tth horizon. Thus, each augmented state will be visited at most once, which leads to the following set of distributions. O C?S? , {?|? = ?s,t ; ?s,t ? Cs , ?s ? S, ?t = 1, 2, ? ? ? }. s?S,t=1,2,??? The second formulation, termed stationary model, treats the system as having a finite number of states, while multiple visits to one state is allowed. That is, if a state s is visited for multiple times, then each time the distribution (of uncertain parameters) ?s is the same. Mathematically, we can adapt the augmented state space as in the non-stationary model, and requires that ?s,t does not depend on t. Thus, the set of admissible distributions is O C?S , {?|? = ?s,t ; ?s,t = ?s ; ?s ? Cs , ?s ? S, ?t = 1, 2, ? ? ? }. s?S,t=1,2,??? The next theorem is the main result of this section; it shows that a sequentially robust policy is distributionally robust to both stationary and non-stationary models. 6 N ? ? Theorem 3. Given T = ? and ? < 1, any sequentially robust policy w.r.t. s Ps where Ps = Pn i { i=1 (?i ? ?i?1 )(rs (i), ps (i))|(ps (i), rs (i)) ? Ps }, is distributionally robust with respect to C?S? , and with respect to C?S . Due to space constraints, we omit the proof details. The basic idea for proving the C?S? case is to consider a T?-truncated problem, i.e., a finite horizon problem that stops at stage T? with a termination reward v?? (?), and show that the optimal strategy for this problem, which is a sequential robust strategy, coincides with that of the infinite horizon one. Indeed, given any sequential robust strategy ? ? , one canRconstruct a stationary distribution ?? such that (? ? , ?? ) is a saddle point for sup???HR inf ??C?S? u(?, p, r) d?(p, r). The proof to C?S follows from C?S ? C?S? and ?? ? C?S . We remark that the decision maker is allowed to take non-stationary strategies, although the distributionally robust solution is proven to be stationary. Before concluding this section, we briefly compare the stationary model and the non-stationary model. These two formulations model different setups: if the system, more specifically the distribution of uncertain parameters, evolves with time, then the non-stationary model is more appropriate; while if the system is static, then the stationary model is preferable. For any given policy, the worst expected performance under the non-stationary model provides a lower bound to that of the stationary model since C?S ? C?S? . Thus, one can use the non-stationary model to approximate the stationary model, when the latter is intractable (e.g., in the finite horizon case; see Nilim and El Ghaoui [5]). When the horizon approaches infinity, such approximation becomes exact, as we showed in this section, the optimal solutions to both formulations coincide, and can be computed by iteratively solving a minimax problem. 5 Numerical simulations In this section we illustrate with numerical examples that by incorporating additional probabilistic information, the distributional robust approach handles uncertainty in a more flexible way, which often leads to a better performance than the nominal approach and the robust approach. We consider a path planning problem: an agent wants to exit a 4 ? 
21 maze (shown in Figure 1) using the least possible time. Starting from the upper-left corner, the agent can move up, down, left and right, but can only exit the grid at the lower-right corner. Here, a white box stands for a normal place where the agent needs one time unit to pass through. A shaded box represents a ?shaky? place. To be more specific, we consider two setups. The first one is the uncertain cost case, where the true (yet unknown to the planning agent) time for the agent to pass through a ?shaky? place equals ?(?), and e ?(?) is an exponential distributed random variable with parameter ?. The three x = 1+e approaches are formulated as follows: the nominal approach takes the most likely value (i.e., 1) as the parameter; the robust approach takes [1, 1 + 3/?] as the uncertainty set; and the distributional robust approach takes into account the additional information that Pr(x ? [1, 1 + log 2/?]) ? 0.5 and Pr(x ? [1, 1 + 2 log 2/?]) ? 0.75. We vary 1/?, and test these approaches using 300 runs for each parameter set. The results are reported in Figure 2 (a). Figure 1: The maze for the path planning problem. The second case is the uncertain transition case: if an agent reaches a ?shaky? place, then the transition becomes unpredictable ? in the next step with probability 20% it will make an (unknown) 7 jump. The three approaches are set as follows: The nominal approach neglects this random jump. The robust approach takes a worst-case analysis, i.e., it assumes that with 20% the agent will jump to the spot with the highest cost-to-go. The distributionally robust approach takes into account an additional information that if a jump happens, the probability that it jumps to a spot that is left to the current place is no more than ?. Each policy is tested over 300 runs, while the true jump is set as with probability 0.2? the agent returns to the starting point (?reboot?), with 0.2(1 ? ?) the agent stay in the current position for a time unit (?stuck?). The results are reported in Figure 2 (b). (a) Uncertain cost (b) Uncertain transition 90 90 nominal robust dis. rob. 80 70 Time to Exit Time to Exit 70 60 50 40 60 50 40 30 30 20 20 10 nominal robust dis. rob. 80 0 1 2 3 4 5 6 7 8 10 9 1/? 0.1 0.2 0.3 0.4 0.5 ? 0.6 0.7 0.8 0.9 1 Figure 2: Simulation results of the path planning problem. In both the uncertain cost and the uncertain transition probability setups, the distributionally robust approach outperforms the other two approach over virtually the whole range of parameters. This is well expected, since additional probabilistic information is available to and incorporated by the distributionally robust approach. 6 Concluding remarks In this paper we proposed a distributionally robust approach to mitigate the conservatism of the robust MDP framework and incorporate additional a-prior probabilistic information regarding the unknown parameters. In particular, we considered the nested-set structured parameter uncertainty to model a-prior probabilistic information of the parameters. We proposed to find a policy that achieves maximum expected utility under the worst admissible distribution of the parameters. Such formulation leads to a policy that is obtained through a Bellman type backward induction, and can be solved in polynomial time under mild technical conditions. A different perspective on our work is that we develop a principled approach to the problem of uncertainty set design in multi-stage decision problems. 
It has been observed that shrinking the uncertainty set in single-stage problems leads to better performance. We provide a principled approach to the problem of uncertainty set selection: the distributionally robust policy is a robust policy w.r.t. a carefully designed single uncertainty set that depends on the a-priori knowledge. A natural question is how can we take advantage of the distributionally robust approach and solve (exactly) a full-blown Bayesian generative model MDP? The problem with taking an increasingly refined nested uncertainty structure (i.e., increasing n) is that of representation: the equivalent robust MDP uncertainty set may become too complicated to represent efficiently. Nevertheless, if it is possible to offer upper and lower bounds on the probability of each nested sets (based on the generative model), the corresponding distributionally robust policies provide performance bounds on the optimal policies in the, often intractable, Bayesian model. Acknowledgements We thank an anonymous reviewer for pointing out relevant references in mathematical finance. H. Xu would like to acknowledge the support from DTRA grant HDTRA1-08-0029. S. Mannor would like to acknowledge the support the Israel Science Foundation under contract 890015. 8 References [1] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, New York, 1994. [2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. [3] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [4] S. Mannor, D. Simester, P. Sun, and J. Tsitsiklis. Bias and variance in value vunction estimation. In Proceedings of the 21th international conference on Machine learning, 2004. [5] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780?798, September 2005. [6] A. Bagnell, A. Ng, and J. Schneider. Solving uncertain Markov decision problems. Technical Report CMU-RI-TR-01-25, Carnegie Mellon University, August 2001. [7] C. C. White III and H. K. El Deib. Markov decision processes with imprecise transition probabilities. Operations Research, 42(4):739?748, July 1992. [8] G. N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257?280, 2005. [9] L. G. Epstein and M. Schneider. Learning under ambiguity. Review of Economic Studies, 74(4):1275?1303, 2007. [10] A. Nilim and L. El Ghaoui. Robustness in Markov decision problems with uncertain transition matrices. In Advances in Neural Information Processing Systems 16, 2004. [11] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, (1):203?213, 2010. [12] H. Xu and S. Mannor. The robustness-performance tradeoff in Markov decision processes. In B. Sch?olkopf, J. C. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, pages 1537?1544. MIT Press, 2007. [13] H. Scarf. A min-max solution of an inventory problem. In Studies in Mathematical Theory of Inventory and Production, pages 201?209. Stanford University Press, 1958. [14] J. Dupacov?a. The minimax approach to stochastic programming and an illustrative application. Stochastics, 20:73?88, 1987. [15] P. Kall. Stochastic programming with recourse: Upper bounds and moment problems, a review. In Advances in Mathematical Optimization. Academie-Verlag, Berlin, 1988. [16] A. Shapiro. Worst-case distribution analysis of stochastic programs. 
Mathematical Programming, 107(1):91?96, 2006. [17] I. Popescu. Robust mean-covariance solutions for stochastic optimization. Operations Research, 55(1):98?112, 2007. [18] E. Delage and Y. Ye. Distributional robust optimization under moment uncertainty with applications to data-driven problems. To appear in Operations Research, 2010. [19] A. Ruszczy?nski. Risk-averse dynamic programming for Markov decision processes. Mathematical Programming, Series B, 125:235?261, 2010. [20] H. F?ollmer and A. Schied. Stochastic finance: An introduction in discrete time. Berlin: Walter de Gruyter, 2002. [21] I. Gilboa and D. Schmeidler. Maxmin expected utility with a non-unique prior. Journal of Mathematical Economics, 18:141?153, 1989. [22] D. Kelsey. Maxmin expected utility and weight of evidence. Oxford Economic Papers, 46:425? 444, 1994. [23] J. Baron. Thinking and Deciding. Cambridge University Press, 2000. 9
t-Logistic Regression

Nan Ding^2, S.V.N. Vishwanathan^{1,2}
Departments of ^1 Statistics and ^2 Computer Science, Purdue University
[email protected], [email protected]

Abstract

We extend logistic regression by using t-exponential families, which were introduced recently in statistical physics. This gives rise to a regularized risk minimization problem with a non-convex loss function. An efficient block coordinate descent optimization scheme can be derived for estimating the parameters. Because of the nature of the loss function, our algorithm is tolerant to label noise. Furthermore, unlike other algorithms which employ non-convex loss functions, our algorithm is fairly robust to the choice of initial values. We verify both these observations empirically on a number of synthetic and real datasets.

1 Introduction

Many machine learning algorithms minimize a regularized risk [1]:

    $J(\theta) = \Omega(\theta) + R_{emp}(\theta)$, where $R_{emp}(\theta) = \frac{1}{m} \sum_{i=1}^{m} l(x_i, y_i, \theta)$.   (1)

Here, $\Omega$ is a regularizer which penalizes complex $\theta$, and $R_{emp}$, the empirical risk, is obtained by averaging the loss $l$ over the training dataset $\{(x_1, y_1), \ldots, (x_m, y_m)\}$. In this paper our focus is on binary classification, wherein the features of a data point $x$ are extracted via a feature map $\phi$ and the label is usually predicted via $\mathrm{sign}(\langle \phi(x), \theta \rangle)$. If we define the margin of a training example $(x, y)$ as $u(x, y, \theta) := y \langle \phi(x), \theta \rangle$, then many popular loss functions for binary classification can be written as functions of the margin. Examples include^1

    $l(u) = 0$ if $u > 0$, and $1$ otherwise       (0-1 loss)          (2)
    $l(u) = \max(0, 1 - u)$                        (Hinge loss)        (3)
    $l(u) = \exp(-u)$                              (Exponential loss)  (4)
    $l(u) = \log(1 + \exp(-u))$                    (Logistic loss)     (5)

The 0-1 loss is non-convex and difficult to handle; it has been shown that it is NP-hard to even approximately minimize the regularized risk with the 0-1 loss [2]. The other loss functions can therefore be viewed as convex proxies of the 0-1 loss: the hinge loss leads to support vector machines (SVMs), the exponential loss is used in AdaBoost, and logistic regression uses the logistic loss. Convexity is a very attractive property because it ensures that the regularized risk minimization problem has a unique global optimum [3]. However, as was recently shown by Long and Servedio [4], learning algorithms based on convex loss functions are not robust to noise^2. Intuitively, a convex loss function grows at least linearly with slope $|l'(0)|$ as $u \to -\infty$, so examples with $u \ll 0$ have an overwhelming impact on the solution. There has been some recent and some not-so-recent work on using non-convex loss functions to alleviate this problem. For instance, a recent manuscript by [5] uses the cdf of the Gaussian distribution to define a non-convex loss.

In this paper, we continue this line of inquiry and propose a non-convex loss function which is firmly grounded in probability theory. By extending logistic regression from the exponential family to the t-exponential family, a natural extension of the exponential family of distributions studied in statistical physics [6-10], we obtain the t-logistic regression algorithm.

^1 We slightly abuse notation and use $l(u)$ to denote $l(u(x, y, \theta))$.
^2 Although the analysis of [4] is carried out in the context of boosting, we believe the results hold for a larger class of algorithms which minimize a regularized risk with a convex loss function.
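As a quick aside, the four margin losses (2)-(5) are one-liners; the following sketch (Python with NumPy, our own illustration rather than anything from the paper) evaluates them on a grid of margins:

```python
import numpy as np

def zero_one_loss(u):
    # Eq. (2): 1 for non-positive margins, 0 otherwise.
    return (u <= 0).astype(float)

def hinge_loss(u):
    # Eq. (3): linear penalty inside the margin.
    return np.maximum(0.0, 1.0 - u)

def exponential_loss(u):
    # Eq. (4): the loss minimized by AdaBoost.
    return np.exp(-u)

def logistic_loss(u):
    # Eq. (5): log(1 + exp(-u)), computed stably.
    return np.logaddexp(0.0, -u)

u = np.linspace(-4.0, 4.0, 9)
for name, f in [("0-1", zero_one_loss), ("hinge", hinge_loss),
                ("exp", exponential_loss), ("logistic", logistic_loss)]:
    print(name, np.round(f(u), 3))
```

Running this makes the robustness issue visible: at u = -4 the hinge, exponential, and logistic losses are all large and still growing, while the 0-1 loss is flat at 1.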
Furthermore, we show that a simple block coordinate descent scheme can be used to solve the resultant regularized risk minimization problem. Analysis of this procedure also intuitively explains why t-logistic regression is able to handle label noise.

Figure 1: Some commonly used loss functions for binary classification, plotted as loss versus margin. The 0-1 loss is non-convex. The hinge, exponential, and logistic losses are convex upper bounds of the 0-1 loss.

Our paper is structured as follows: In section 2 we briefly review logistic regression, especially in the context of exponential families. In section 3 we review t-exponential families, which form the basis for our proposed t-logistic regression algorithm introduced in section 4. In section 5 we utilize ideas from convex multiplicative programming to design an optimization strategy. Experiments that compare our new approach to existing algorithms on a number of publicly available datasets are reported in section 6, and the paper concludes with a discussion and outlook in section 7. Some technical details as well as extra experimental results can be found in the supplementary material.

2 Logistic Regression

Since we build upon the probabilistic underpinnings of logistic regression, we briefly review some salient concepts. Details can be found in any standard textbook such as [11] or [12]. Assume we are given a labeled dataset $(X, Y) = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ with the $x_i$'s drawn from some domain $X$ and the labels $y_i \in \{\pm 1\}$. Given a family of conditional distributions parameterized by $\theta$, using Bayes rule, and making a standard iid assumption about the data allows us to write

    $p(\theta \mid X, Y) = p(\theta) \prod_{i=1}^m p(y_i \mid x_i; \theta) / p(Y \mid X) \propto p(\theta) \prod_{i=1}^m p(y_i \mid x_i; \theta)$,   (6)

where $p(Y \mid X)$ is clearly independent of $\theta$. To model $p(y \mid x; \theta)$, consider the conditional exponential family of distributions

    $p(y \mid x; \theta) = \exp(\langle \phi(x, y), \theta \rangle - g(\theta \mid x))$,   (7)

with the log-partition function $g(\theta \mid x)$ given by

    $g(\theta \mid x) = \log(\exp(\langle \phi(x, +1), \theta \rangle) + \exp(\langle \phi(x, -1), \theta \rangle))$.   (8)

If we choose the feature map $\phi(x, y) = \frac{y}{2}\phi(x)$ and denote $u = y \langle \phi(x), \theta \rangle$, then it is easy to see that $p(y \mid x; \theta)$ is the logistic function

    $p(y \mid x; \theta) = \frac{\exp(u/2)}{\exp(u/2) + \exp(-u/2)} = \frac{1}{1 + \exp(-u)}$.   (9)

By assuming a zero mean isotropic Gaussian prior $N(0, \frac{1}{\lambda} I)$ for $\theta$, plugging in (9), and taking logarithms, we can rewrite (6) as

    $-\log p(\theta \mid X, Y) = \frac{\lambda}{2} \|\theta\|^2 + \sum_{i=1}^m \log(1 + \exp(-y_i \langle \phi(x_i), \theta \rangle)) + \mathrm{const.}$   (10)

Logistic regression computes a maximum a-posteriori (MAP) estimate for $\theta$ by minimizing (10) as a function of $\theta$. Comparing (1) and (10) it is easy to see that the regularizer employed in logistic regression is $\frac{\lambda}{2} \|\theta\|^2$, while the loss function is the negative log-likelihood $-\log p(y \mid x; \theta)$, which thanks to (9) can be identified with the logistic loss (5).
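To make the MAP estimation problem concrete, here is a minimal sketch of the objective (10) and its gradient, suitable for any first-order optimizer (Python/NumPy; the function names and the design-matrix convention, rows $\phi(x_i)^\top$, are our assumptions):

```python
import numpy as np

def neg_log_posterior(theta, X, y, lam):
    """Eq. (10): (lam/2) ||theta||^2 + sum_i log(1 + exp(-y_i <phi(x_i), theta>))."""
    u = y * (X @ theta)                    # margins u_i
    return 0.5 * lam * theta @ theta + np.logaddexp(0.0, -u).sum()

def neg_log_posterior_grad(theta, X, y, lam):
    u = y * (X @ theta)
    s = 1.0 / (1.0 + np.exp(u))            # sigmoid(-u) = 1 - p(y_i | x_i; theta)
    return lam * theta - X.T @ (y * s)
```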
3 t-Exponential Family of Distributions

In this section we look at generalizations of the log and exp functions which were first introduced in statistical physics [6-9]. Some extensions and machine learning applications were presented in [13]. In fact, a more general class of functions was studied in these publications, but for our purposes we restrict our attention to the so-called t-exponential and t-logarithm functions. The t-exponential function $\exp_t$ for $0 < t < 2$ is defined as follows:

    $\exp_t(x) := \exp(x)$ if $t = 1$, and $[1 + (1-t)x]_+^{1/(1-t)}$ otherwise,   (11)

where $(\cdot)_+ = \max(\cdot, 0)$. Some examples are shown in Figure 2. Clearly, $\exp_t$ generalizes the usual exp function, which is recovered in the limit as $t \to 1$. Furthermore, many familiar properties of exp are preserved: $\exp_t$ functions are convex, non-decreasing, non-negative and satisfy $\exp_t(0) = 1$ [9]. But $\exp_t$ does not preserve one very important property of exp, namely $\exp_t(a + b) \neq \exp_t(a) \cdot \exp_t(b)$. One can also define the inverse of $\exp_t$, namely $\log_t$, as

    $\log_t(x) := \log(x)$ if $t = 1$, and $(x^{1-t} - 1)/(1-t)$ otherwise.   (12)

Similarly, $\log_t(ab) \neq \log_t(a) + \log_t(b)$. From Figure 2, it is clear that $\exp_t$ decays towards 0 more slowly than the exp function for $1 < t < 2$. This important property leads to a family of heavy-tailed distributions which we will later exploit.

Figure 2: Left: $\exp_t$, and middle: $\log_t$, for various values of $t$. The right panel depicts the t-logistic loss functions for different values of $t$; when $t = 1$, we recover the logistic loss.

Analogous to the exponential family of distributions, the t-exponential family of distributions is defined as [9, 13]:

    $p(x; \theta) := \exp_t(\langle \phi(x), \theta \rangle - g_t(\theta))$.   (13)

A prominent member of the t-exponential family is the Student's-t distribution [14]. Just like in the exponential family case, the log-partition function $g_t$ ensures that $p(x; \theta)$ is normalized. However, no closed form solution exists for computing $g_t$ exactly in general. A closely related distribution, which often appears when working with t-exponential families, is the so-called escort distribution [9, 13]:

    $q_t(x; \theta) := p(x; \theta)^t / Z(\theta)$,   (14)

where $Z(\theta) = \int p(x; \theta)^t \, dx$ is the normalizing constant which ensures that the escort distribution integrates to 1.

Although $g_t(\theta)$ is not the cumulant function of the t-exponential family, it still preserves convexity. In addition, it is very close to being a moment generating function:

    $\nabla_\theta g_t(\theta) = E_{q_t(x;\theta)}[\phi(x)]$.   (15)

The proof is provided in the supplementary material. A general version of this result appears as Lemma 3.8 in Sears [13], and a version specialized to the generalized exponential families appears as Proposition 5.2 in [9]. The main difference from $\nabla_\theta g(\theta)$ of the normal exponential family is that $\nabla_\theta g_t(\theta)$ is equal to the expectation of $\phi(x)$ under the escort distribution $q_t(x; \theta)$ instead of under $p(x; \theta)$.

4 Binary Classification with the t-Exponential Family

In t-logistic regression we model $p(y \mid x; \theta)$ via a conditional t-exponential family distribution

    $p(y \mid x; \theta) = \exp_t(\langle \phi(x, y), \theta \rangle - g_t(\theta \mid x))$,   (16)

where $1 < t < 2$, and compute the log-partition function $g_t$ by noting that

    $\exp_t(\langle \phi(x, +1), \theta \rangle - g_t(\theta \mid x)) + \exp_t(\langle \phi(x, -1), \theta \rangle - g_t(\theta \mid x)) = 1$.   (17)

Even though no closed form solution exists, one can compute $g_t$ given $\theta$ and $x$ efficiently using numerical techniques. The Student's-t distribution can be regarded as the counterpart of the isotropic Gaussian prior in the t-exponential family [14]. Recall that a one dimensional Student's-t distribution is given by

    $St(x \mid \mu, \sigma, v) = \frac{\Gamma((v+1)/2)}{\sqrt{v\pi}\, \Gamma(v/2)\, \sigma^{1/2}} \left(1 + \frac{(x-\mu)^2}{v\sigma}\right)^{-(v+1)/2}$,   (18)

where $\Gamma(\cdot)$ denotes the usual Gamma function and $v > 1$ so that the mean is finite. If we select $t$ satisfying $-(v+1)/2 = 1/(1-t)$ and denote

    $\Psi = \left(\frac{\Gamma((v+1)/2)}{\sqrt{v\pi}\, \Gamma(v/2)\, \sigma^{1/2}}\right)^{-2/(v+1)}$,

then by some simple but tedious calculation (included in the supplementary material)

    $St(x \mid \mu, \sigma, v) = \exp_t\!\left(-\tilde{\lambda}(x - \mu)^2/2 - \tilde{g}_t\right)$,   (19)

where $\tilde{\lambda} = 2\Psi / ((t-1) v \sigma)$ and $\tilde{g}_t = (\Psi - 1)/(t-1)$.
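All three ingredients used below, $\exp_t$, $\log_t$, and the implicit log-partition function of (17), are easy to compute numerically. A minimal sketch (Python/NumPy; bisection is our choice here, whereas the authors tabulate $g_t$ and interpolate with splines, as discussed in section 6):

```python
import numpy as np

def exp_t(x, t):
    # Eq. (11): reduces to exp when t = 1; otherwise [1 + (1-t)x]_+^(1/(1-t)).
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * np.asarray(x, float), 0.0) ** (1.0 / (1.0 - t))

def log_t(x, t):
    # Eq. (12): inverse of exp_t on its positive range.
    if t == 1.0:
        return np.log(x)
    return (np.asarray(x, float) ** (1.0 - t) - 1.0) / (1.0 - t)

def log_partition_gt(a_pos, a_neg, t, tol=1e-10):
    """Solve (17): exp_t(a_pos - g) + exp_t(a_neg - g) = 1 for g by bisection,
       where a_y = <phi(x, y), theta>.  The left-hand side is decreasing in g."""
    f = lambda g: exp_t(a_pos - g, t) + exp_t(a_neg - g, t) - 1.0
    lo = max(a_pos, a_neg)                 # f(lo) >= 0 since exp_t(0) = 1
    hi = lo + 1.0
    while f(hi) > 0:                       # expand until the root is bracketed
        hi += 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

x = np.linspace(-0.4, 1.0, 7)
assert np.allclose(log_t(exp_t(x, 1.9), 1.9), x)   # inverses on this range
print(exp_t(-5.0, 1.9), np.exp(-5.0))              # much heavier tail than exp
print(log_partition_gt(0.5, -0.5, 1.9))            # g_t for a unit margin
```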
Therefore, we work with the Student's-t prior in our setting:

    $p(\theta) = \prod_{j=1}^d p(\theta_j) = \prod_{j=1}^d St(\theta_j \mid 0,\, 2/\lambda,\, (3-t)/(t-1))$.   (20)

Here, the degrees of freedom of the Student's-t distribution are chosen such that the prior also belongs to the $\exp_t$ family, which in turn yields $v = (3-t)/(t-1)$. The Student's-t prior is usually preferred to the Gaussian prior when the underlying distribution is heavy-tailed. In practice, it is known to be a robust^3 alternative to the Gaussian distribution [16, 17].

As before, if we let $\phi(x, y) = \frac{y}{2}\phi(x)$ and plot the negative log-likelihood $-\log p(y \mid x; \theta)$, then we no longer obtain a convex loss function (see Figure 2). Similarly, $-\log p(\theta)$ is no longer convex when we use the Student's-t prior. This makes optimizing the regularized risk challenging, and we therefore employ a different strategy. Since $\log_t$ is also a monotonically increasing function, instead of working with log we can equivalently work with the $\log_t$ function (12) and minimize the following objective:

    $J(\theta) = -\log_t\!\left( p(\theta) \prod_{i=1}^m p(y_i \mid x_i; \theta) / p(Y \mid X) \right) = \frac{1}{t-1} \left[ p(\theta) \prod_{i=1}^m p(y_i \mid x_i; \theta)/p(Y \mid X) \right]^{1-t} + \frac{1}{1-t}$,   (21)

where $p(Y \mid X)$ is independent of $\theta$. Using (13), (18), and (11), we can further write

    $J(\theta) \propto \prod_{j=1}^d \left[1 + (1-t)\left(-\tilde{\lambda}\theta_j^2/2 - \tilde{g}_t\right)\right] \cdot \prod_{i=1}^m \left[1 + (1-t)\left(\langle \tfrac{y_i}{2}\phi(x_i), \theta \rangle - g_t(\theta \mid x_i)\right)\right] + \mathrm{const.} = \prod_{j=1}^d r_j(\theta) \prod_{i=1}^m l_i(\theta) + \mathrm{const.}$   (22)

^3 There is no unique definition of robustness. For example, one definition is through outlier-proneness [15]: $p(\theta \mid X, Y, x_{n+1}, y_{n+1}) \to p(\theta \mid X, Y)$ as $x_{n+1} \to \infty$.

Since $t > 1$, it is easy to see that $r_j(\theta) > 0$ is a convex function of $\theta$. On the other hand, since $g_t$ is convex and $t > 1$, it follows that $l_i(\theta) > 0$ is also a convex function of $\theta$. In summary, $J(\theta)$ is a product of positive convex functions. In the next section we present an efficient optimization strategy for dealing with such problems.

5 Convex Multiplicative Programming

In convex multiplicative programming [18] we are interested in the following optimization problem:

    $\min_\theta\; P(\theta) := \prod_{n=1}^N z_n(\theta)$   s.t. $\theta \in R^d$,   (23)

where the $z_n(\theta)$ are positive convex functions. Clearly, (22) can be identified with (23) by setting $N = d + m$ and identifying $z_n(\theta) = r_n(\theta)$ for $n = 1, \ldots, d$ and $z_{n+d}(\theta) = l_n(\theta)$ for $n = 1, \ldots, m$. The optimal solutions to problem (23) can be obtained by solving the following parametric problem (see Theorem 2.1 of Kuno et al. [18]):

    $\min_\lambda \min_\theta\; MP(\theta, \lambda) := \sum_{n=1}^N \lambda_n z_n(\theta)$   s.t. $\theta \in R^d$, $\lambda > 0$, $\prod_{n=1}^N \lambda_n \ge 1$.   (24)

The optimization problem in (24) is very reminiscent of logistic regression. In logistic regression, $l_n(\theta) = -\langle \tfrac{y_n}{2}\phi(x_n), \theta \rangle + g(\theta \mid x_n)$, while here $l_n(\theta) = 1 + (1-t)(\langle \tfrac{y_n}{2}\phi(x_n), \theta \rangle - g_t(\theta \mid x_n))$. The key difference is that in t-logistic regression each data point $x_n$ has a weight (or influence) $\lambda_n$ associated with it. Exact algorithms have been proposed for solving (24) (for instance, [18]). However, the computational cost of these algorithms grows exponentially with respect to $N$, which makes them impractical for our purposes. Instead, we apply a block coordinate descent based method. The main idea is to minimize (24) with respect to $\lambda$ and $\theta$ separately.
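To make the product structure of (22) concrete, here is a small sketch that assembles the factors $z_n(\theta)$ (Python/NumPy; it reuses `exp_t` and `log_partition_gt` from the earlier sketch, treats the prior constants `lam_tilde` and `g_tilde` from (19)-(20) as given inputs, and all names are ours):

```python
import numpy as np

def factors(theta, X, y, t, lam_tilde, g_tilde):
    """Positive convex factors of (22): z = (r_1, ..., r_d, l_1, ..., l_m).
       Requires log_partition_gt from the bisection sketch above."""
    r = 1.0 + (1.0 - t) * (-0.5 * lam_tilde * theta ** 2 - g_tilde)
    a = 0.5 * (X @ theta)              # <phi(x_i, +1), theta> = a_i; for y = -1 it is -a_i
    g = np.array([log_partition_gt(ai, -ai, t) for ai in a])
    l = 1.0 + (1.0 - t) * (y * a - g)
    return np.concatenate([r, l])      # J(theta) is, up to constants, np.prod(factors(...))
```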
$\lambda$-Step: Assume that $\theta$ is fixed, and denote $\bar{z}_n = z_n(\theta)$ to rewrite (24) as:

    $\min_\lambda \sum_{n=1}^N \lambda_n \bar{z}_n$   s.t. $\lambda > 0$, $\prod_{n=1}^N \lambda_n \ge 1$.   (25)

Since the objective function is linear in $\lambda$ and the feasible region is a convex set, (25) is a convex optimization problem. By introducing a non-negative Lagrange multiplier $\mu \ge 0$, the partial Lagrangian and its gradient with respect to $\lambda_{n'}$ can be written as

    $L(\lambda, \mu) = \sum_{n=1}^N \lambda_n \bar{z}_n + \mu \left(1 - \prod_{n=1}^N \lambda_n\right)$,   (26)

    $\frac{\partial}{\partial \lambda_{n'}} L(\lambda, \mu) = \bar{z}_{n'} - \mu \prod_{n \ne n'} \lambda_n$.   (27)

Setting the gradient to 0 gives $\mu \prod_{n \ne n'} \lambda_n = \bar{z}_{n'}$. Since $\bar{z}_{n'} > 0$, it follows that $\mu$ cannot be 0. By the K.K.T. conditions [3], we can conclude that $\prod_{n=1}^N \lambda_n = 1$. This in turn implies that $\mu = \bar{z}_{n'} \lambda_{n'}$, or

    $\lambda = (\lambda_1, \ldots, \lambda_N) = (\mu/\bar{z}_1, \ldots, \mu/\bar{z}_N)$, with $\mu = \left(\prod_{n=1}^N \bar{z}_n\right)^{1/N}$.   (28)

Recall that $\lambda_n$ in (24) is the weight (or influence) of each term $z_n(\theta)$. The above analysis shows that $\mu = \bar{z}_n(\theta) \lambda_n$ remains constant for all $n$. If $\bar{z}_n(\theta)$ becomes very large, then its influence $\lambda_n$ is reduced. Therefore, points with very large loss have their influence capped, and this is what makes the algorithm robust to outliers.
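The closed form (28) is worth seeing in code; a two-line sketch (names ours) that also shows the influence-capping behavior:

```python
import numpy as np

def lambda_step(z_bar):
    """Eq. (28): lambda_n = mu / z_bar_n, with mu the geometric mean of the z_bar's."""
    z_bar = np.asarray(z_bar, float)
    mu = np.exp(np.mean(np.log(z_bar)))    # (prod z_bar_n)^(1/N), stable in log space
    return mu / z_bar

lam = lambda_step([0.5, 1.0, 8.0])
print(lam, np.prod(lam))   # prod(lam) = 1; the largest z_bar gets the smallest weight
```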
{1.3, 1.6, 1.9} for t-logistic regression. Our comparators are logistic regression, linear SVMs4 , and an algorithm (the probit) which employs the probit loss, L(u) = 1 ? erf (2u), used in BrownBoost/RobustBoost [5]. We use the L-BFGS algorithm [21] for the ?-step in t-logistic regression. L-BFGS is also used to train logistic regression and the probit loss based algorithms. Label noise is added by randomly choosing 10% of the labels in the training set and ?ipping them; each dataset is tested with and without label noise. We randomly select and hold out 30% of each dataset as a validation set and use the rest of the 70% for 10-fold cross validation. The optimal parameters namely ? for t-logistic and ?logistic regression and C for SVMs is chosen by performing a grid search over the ? parameter space 2?7,?6,...,7 and observing the prediction accuracy over the validation set. The convergence criterion is to stop when the change in the objective function value is less than 10?4 . All code is written in Matlab, and for the linear SVM we use the Matlab interface of LibSVM [22]. Experiments were performed on a Qual-core machine with Dual 2.5 Ghz processor and 32 Gb RAM. In Figure 3, we plot the test error with and without label noise. In the latter case, the test error of t-logistic regression is very similar to logistic regression and Linear SVM (with 0% test error in 4 We also experimented with RampSVM [20], however, the results are worser than the other algorithms. We therefore report these results in the supplementary material. 6 6.0 TestError(%) 32 1.2 4.5 24 0.9 3.0 16 8 1.5 0.3 0 0.0 0.0 16.8 3.2 16.0 2.4 6.0 TestError(%) 0.6 4.5 3.0 1.6 15.2 1.5 0.8 14.4 0.0 logis. t=1.3 t=1.6 t=1.9 probit SVM logis. t=1.3 t=1.6 t=1.9 probit SVM 0.0 logis. t=1.3 t=1.6 t=1.9 probit SVM Figure 3: The test error rate of various algorithms on six datasets (left to right, top: Long-Servedio, Mease-Wyner, Mushroom; bottom: USPS-N, Adult, Web) with and without 10% label noise. All algorithms are initialized with ? = 0. The blue (light) bar denotes a clean dataset while the magenta (dark) bar are the results with label noise added. Also see Table 3 in the supplementary material. Long-Servedio and Mushroom datasets), with a slight edge on some datasets such as Mease-Wyner. When label noise is added, t-logistic regression (especially with t = 1.9) shows signi?cantly5 better performance than all the other algorithms on all datasets except the USPS-N, where it is marginally outperformed by the probit. To obtain Figure 4 we used the noisy version of the datasets, chose one of the 10 folds used in the previous experiment, and plotted the distribution of the 1/z ? ? obtained after training with t = 1.9. To distinguish the points with noisy labels we plot them in cyan while the other points are plotted in red. Analogous plots for other values of t can be found in the supplementary material. Recall that ? denotes the in?uence of a point. One can clearly observe that the ? of the noisy data is much smaller than that of the clean data, which indicates that the algorithm is able to effectively identify these points and cap their in?uence. In particular, on the Long-Servedio dataset observe the 4 distinct spikes. From left to right, the ?rst spike corresponds to the noisy large margin examples, the second spike represents the noisy pullers, the third spike denotes the clean pullers, while the rightmost spike corresponds to the clean large margin examples. 
Clearly, the noisy large margin examples and the noisy pullers are assigned a low value of ? thus capping their in?uence and leading to the perfect classi?cation of the test set. On the other hand, logistic regression is unable to discriminate between clean and noisy training samples which leads to bad performance on noisy datasets. Detailed timing experiments can be found in Table 4 in the supplementary material. In a nutshell, t-logistic regression takes longer to train than either logistic regression or the probit. The reasons are not dif?cult to see. First, there is no closed form expression for gt (? | x). We therefore resort to pre-computing it at some ?xed locations and using a spline method to interpolate values at other locations. Second, since the objective function is not convex several iterations of the ? and ? steps might be needed. Surprisingly, the L-BFGS algorithm, which is not designed to optimize nonconvex functions, is able to minimize (22) directly in many cases. When it does converge, it is often faster than the convex multiplicative programming algorithm. However, on some cases (as expected) it fails to ?nd a direction of descent and exits. A common remedy for this is the bundle L-BFGS with a trust-region approach. [21] Given that the t-logistic objective function is non-convex, one naturally worries about how different initial values affect the quality of the ?nal solution. To answer this question, we initialized the algorithm with 50 different randomly chosen ? ? [?0.5, 0.5]d , and report test performances of the various solutions obtained in Figure 5. Just like logistic regression which uses a convex loss and hence converges to the same solution independent of the initialization, the solution obtained 5 We provide the signi?cance test results in Table 2 of supplementary material. 7 300 1000 60 Frequency 240 800 45 180 600 120 30 400 60 15 200 0 0.0 0.2 0.4 0.6 0.8 0 0.0 1.0 0.2 0.4 0.6 0.8 1.0 0 0.0 0.2 0.4 0.2 0.4 0.6 0.8 1.0 0.6 0.8 1.0 600 1200 8000 900 6000 600 4000 300 2000 Frequency 450 300 150 0 0.0 0.2 0.4 0.6 0.8 1.0 0 0.0 0.2 0.4 ? 0.6 0.8 1.0 0 0.0 ? ? Figure 4: The distribution of ? obtained after training t-logistic regression with t = 1.9 on datasets with 10% label noise. Left to right, top: Long-Servedio, Mease-Wyner, Mushroom; bottom: USPSN, Adult, Web. The red (dark) bars (resp. cyan (light) bars) indicate the frequency of ? assigned to points without (resp. with) label noise. by t-logistic regression seems fairly independent of the initial value of ?. On the other hand, the performance of the probit ?uctuates widely with different initial values of ?. probit t = 1.9 t = 1.6 t = 1.3 logistic 0 10 20 30 0 10 20 30 40 0.00 0.15 0.30 0.45 probit t = 1.9 t = 1.6 t = 1.3 logistic 3.0 4.5 6.0 7.5 TestError(%) 9.0 15 18 21 TestError(%) 24 1.5 2.0 2.5 TestError(%) 3.0 3.5 Figure 5: The Error rate by different initialization. Left to right, top: Long-Servedio, Mease-Wyner, Mushroom; bottom: USPS-N, Adult, Web. 7 Discussion and Outlook In this paper, we generalize logistic regression to t-logistic regression by using the t-exponential family. The new algorithm has a probabilistic interpretation and is more robust to label noise. Even though the resulting objective function is non-convex, empirically it appears to be insensitive to initialization. There are a number of avenues for future work. On Long-Servedio experiment, if the label noise is increased signi?cantly beyond 10%, the performance of t-logistic regression may degrade (see Fig. 
On the Long-Servedio experiment, if the label noise is increased significantly beyond 10%, the performance of t-logistic regression may degrade (see Fig. 6 in the supplementary material). Understanding and explaining this issue theoretically and empirically remains an open problem. It will be interesting to investigate whether t-logistic regression can be married with graphical models to yield t-conditional random fields. We will also focus on better numerical techniques to accelerate the $\theta$-step, especially a faster way to compute $g_t$.

References

[1] Choon Hui Teo, S.V.N. Vishwanathan, Alex J. Smola, and Quoc V. Le. Bundle methods for regularized risk minimization. J. Mach. Learn. Res., 11:311-365, January 2010.
[2] S. Ben-David, N. Eiron, and P.M. Long. On the difficulty of approximately maximizing agreements. J. Comput. System Sci., 66(3):496-514, 2003.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, England, 2004.
[4] Phil Long and Rocco Servedio. Random classification noise defeats all convex potential boosters. Machine Learning Journal, 78(3):287-304, 2010.
[5] Yoav Freund. A more robust boosting algorithm. Technical Report Arxiv/0905.2138, Arxiv, May 2009.
[6] J. Naudts. Deformed exponentials and logarithms in generalized thermostatistics. Physica A, 316:323-334, 2002. URL http://arxiv.org/pdf/cond-mat/0203489.
[7] J. Naudts. Generalized thermostatistics based on deformed exponential and logarithmic functions. Physica A, 340:32-40, 2004.
[8] J. Naudts. Generalized thermostatistics and mean-field theory. Physica A, 332:279-300, 2004.
[9] J. Naudts. Estimators, escort probabilities, and phi-exponential families in statistical physics. Journal of Inequalities in Pure and Applied Mathematics, 5(4), 2004.
[10] C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys., 52, 1988.
[11] Christopher Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[12] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, New York, 2nd edition, 2009.
[13] Timothy D. Sears. Generalized Maximum Entropy, Convexity, and Machine Learning. PhD thesis, Australian National University, 2008.
[14] Andre Sousa and Constantino Tsallis. Student's t- and r-distributions: Unified derivation from an entropic variational principle. Physica A, 236:52-57, 1994.
[15] A. O'Hagan. On outlier rejection phenomena in Bayes inference. Royal Statistical Society, 41(3):358-367, 1979.
[16] Kenneth L. Lange, Roderick J.A. Little, and Jeremy M.G. Taylor. Robust statistical modeling using the t distribution. Journal of the American Statistical Association, 84(408):881-896, 1989.
[17] J. Vanhatalo, P. Jylanki, and A. Vehtari. Gaussian process regression with Student-t likelihood. In Neural Information Processing Systems, 2009.
[18] Takahito Kuno, Yasutoshi Yajima, and Hiroshi Konno. An outer approximation method for minimizing the product of several convex functions on a convex set. Journal of Global Optimization, 3(3):325-335, September 1993.
[19] David Mease and Abraham Wyner. Evidence contrary to the statistical view of boosting. J. Mach. Learn. Res., 9:131-156, February 2008.
[20] R. Collobert, F.H. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In W.W. Cohen and A. Moore, editors, Machine Learning, Proceedings of the Twenty-Third International Conference (ICML 2006), pages 201-208. ACM, 2006.
[21] J. Nocedal and S.J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer, 1999.
[22] C.C. Chang and C.J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[23] Fabian Sinz. UniverSVM: Support Vector Machine with Large Scale CCCP Functionality, 2006. Software available at http://www.kyb.mpg.de/bs/people/fabee/universvm.html.
Deep Coding Network

Yuanqing Lin†  Tong Zhang‡  Shenghuo Zhu†  Kai Yu†
† NEC Laboratories America, Cupertino, CA 95129
‡ Rutgers University, Piscataway, NJ 08854

Abstract

This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, where a two-layer coding scheme is derived based on a theoretical analysis of nonlinear functional approximation that extends recent results for local coordinate coding. The two-layer approach can be easily generalized to deeper structures in a hierarchical multiple-layer manner. Empirically, it is shown that the deep coding approach yields improved performance on benchmark datasets.

1 Introduction

Sparse coding has attracted significant attention in recent years because it has been shown to be effective for some classification problems [12, 10, 9, 13, 11, 14, 2, 5]. In particular, it has been empirically observed that high-dimensional sparse coding plus a linear classifier is successful for image classification tasks such as PASCAL 2009 [7, 15]. The empirical success of sparse coding can be justified by theoretical analysis [17], which showed that a modification of sparse coding with an added locality constraint, called local coordinate coding (LCC), represents a new class of effective high-dimensional nonlinear function approximation methods with sound theoretical guarantees. Specifically, LCC learns a nonlinear function in high dimension by forming an adaptive set of basis functions on the data manifold, and it has nonlinear approximation power. A recent extension of LCC with added local tangent directions [16] demonstrated the possibility of achieving locally quadratic approximation power when the underlying data manifold is relatively flat. This also indicates that the nonlinear function approximation view of sparse coding not only yields a deeper theoretical understanding of its success, but also leads to improved algorithms based on refined analysis. This paper follows the same idea: we propose a principled extension of single-layer sparse coding based on theoretical analysis of a two-level coding scheme. The algorithm derived from this approach has some advantages over the single-layer approach, and can also be extended into multi-layer hierarchical systems. Such an extension draws a connection to deep belief networks (DBN) [8], and hence we call this approach a deep coding network.

Hierarchical sparse coding has two main advantages over its single-layer counterpart. First, at the intuitive level, the first layer (the traditional single-layer basis) yields a crude description of the data at each basis function, and multi-layer basis functions provide a natural way to zoom into each single basis for finer local details; this intuition is reflected more rigorously in our nonlinear function approximation result. Due to the more localized zoom-in effect, it also alleviates the problem of overfitting when many basis functions are needed. Second, it is computationally more efficient than flat coding because we only need to look at locations in the second (or higher) layer corresponding to basis functions with nonzero coefficients in the first (or previous) layer. Since sparse coding produces many zero coefficients, the hierarchical structure eliminates much of the coding computation. Moreover, instead of fitting a single model with many variables as in a flat single-layer approach, our proposal of multi-layer coding requires fitting many small models separately, each with a small number of parameters.
In particular, fitting the small models can be done in parallel, e.g. using Hadoop, so that learning a fairly big number of codebooks can still be fast. 2 Sparse Coding and Nonlinear Function Approximation This section reviews the nonlinear function approximation results of single-layer coding scheme in [17], and then presents our multi-layer extension. Since the result of [17] requires a modification of the traditional sparse coding scheme called local coordinate coding (LCC), our analysis will rely on a similar modification. Consider the problem of learning a nonlinear function f (x) in high dimension: x ? Rd with large d. While there are many algorithms in traditional statistics that can learn such a function in low dimension, when the dimensionality d is large compared to n, the traditional statistical methods will suffer the so called ?curse of dimensionality?. The recently popularized coding approach addresses this issue. Specifically, it was theoretically shown in [17] that a specific coding scheme called Local Coordinate Coding can take advantage of the underlying data manifold geometric structure in order to learn a nonlinear function in high dimension and alleviate the curse of dimensionality problem. The main idea of LCC, described in [17], is to locally embed points on the underlying data manifold into a lower dimensional space, expressed as coordinates with respect to a set of anchor points. The main theoretical observation was relatively simple: it was shown in [17] that on the data manifold, a nonlinear function can be effectively approximated by a globally linear function with respect to the local coordinate coding. Therefore the LCC approach turns a very difficult high dimensional nonlinear learning problem into a much simpler linear learning problem, which can be effectively solved using standard machine learning techniques such as regularized linear classifiers. This linearization is effective because the method naturally takes advantage of the geometric information. In order to describe the results more formally, we introduce a number of notations. First we denote by k ? k the Euclidean norm (2-norm) on Rd : q kxk = kxk2 = x21 + ? ? ? + x2d . Definition 2.1 (Smoothness Conditions) A function f (x) on Rd is (?, ?, ?) Lipschitz smooth with respect to a norm k ? k if k?f (x)k ? ?, and and f (x? ) ? f (x) ? ?f (x)? (x? ? x) ? ?kx? ? xk2 , f (x? ) ? f (x) ? 0.5(?f (x? ) + ?f (x))? (x? ? x) ??kx ? x? k3 , where we assume ?, ?, ? ? 0. These conditions have been used in [16], and they characterize the smoothness of f under zero-th, first, and second order approximations. The parameter ? is the Lipschitz constant of f (x), which is finite if f (x) is Lipschitz; in particular, if f (x) is constant, then ? = 0. The parameter ? is the Lipschitz derivative constant of f (x), which is finite if the derivative ?f (x) is Lipschitz; in particular, if ?f (x) is constant (that is, f (x) is a linear function of x), then ? = 0. The parameter ? is the Lipschitz Hessian constant of f (x), which is finite if the Hessian of f (x) is Lipschitz; in particular, if the Hessian ?2 f (x) is constant (that is, f (x) is a quadratic function of x), then ? = 0. In other words, these parameters measure different levels of smoothness of f (x): locally when kx ? x? k is small, ? measures how well f (x) can be approximated by a constant function, ? measures how well f (x) can be approximated by a linear function in x, and ? measures how well f (x) can be approximated by a quadratic function in x. 
For local constant approximation, the error term ?kx?x? k is the first order in kx?x? k; for local linear approximation, the error term ?kx?x? k2 is the second order in kx ? x? k; for local quadratic approximation, the error term ?kx ? x? k3 is the third order in kx ? x? k. That is, if f (x) is smooth with relatively small ?, ?, ?, the error term becomes smaller (locally when kx ? x? k is small) if we use a higher order approximation. 2 Similar to the single-layer coordinate coding in [17], here we define a two-layer coordinate coding as the following. Definition 2.2 (Coordinate Coding) A single-layer coordinate coding is a pair (? 1 , C 1 ), where C 1 ? Rd is a set of anchor points (aka basis functions), and ? is a map of x ? Rd to [?v1 (x)]v?C 1 ? P 1 R|C | such that v?C 1 ?v1 (x) = 1. It induces the following physical approximation of x in Rd : X ?v1 (x)v. h? 1 ,C 1 (x) = v?C 1 A two-layer coordinate coding (?, C) consists of coordinate coding systems {(? 1 , C 1 )} ? {(? 2,v , C 2,v ) : v ? C 1 }. The pair (? 1 , C 1 ) is the first layer coordinate coding, (? 2,v , C 2,v ) are second layer coordinate-coding pairs that refine the first layer coding for every first-layer anchor point v ? C 1 . The performance of LCC is characterized in [17] using the following nonlinear function approximation result. Lemma 2.1 (Single-layer LCC Nonlinear Function Approximation) Let (? 1 , C 1 ) be an arbitrary single-layer coordinate coding scheme on Rd . Let f be an (?, ?, ?)-Lipschitz smooth function. We have for all x ? Rd : X X 1 wv ?v (x) ? ? x ? h? 1 ,C 1 (x) + ? |?v1 (x)|kv ? xk2 , (1) f (x) ? 1 1 v?C v?C where wv = f (v) for v ? C 1 . This result shows that a high dimensional nonlinear function can be globally approximated by a linear function with respect to the single-layer coding [?v1 (x)], with unknown linear coefficients [wv ]v?C 1 = [f (v)]v?C 1 , where the approximation on the right hand size is second order. This bounds directly suggests the following learning method: for each x, we use its coding [?v1 (x)] ? P 1 R|C | as features. We then learn a linear function of the form v wv ?v1 (x) using a standard linear learning method such as SVM, where [wv ] is the unknown coefficient vector to be learned. The optimal coding can be learned using unlabeled data by optimizing the right hand side of (1) over unlabeled data. In the same spirit, we can extend the above result on LCC by including additional layers. This leads to the following bound. Lemma 2.2 (Two-layer LCC Nonlinear Function Approximation) Let (?, C) = {(? 1 , C 1 )} ? {(? 2,v , C 2,v ) : v ? C 1 } be an arbitrary two-layer coordinate coding on Rd . Let f be an (?, ?, ?)Lipschitz smooth function. We have for all x ? Rd : X X X kf (x) ? wv ?v1 (x) ? ?v1 (x) wv,u ?u2,v (x)k v?C 1 v?C 1 ?0.5?kx ? h? 1 ,C 1 (x)k + 0.5? X u?C 2,v |?v1 (x)|kx ? h? 2,v ,C 2,v (x)k + ? v?C 1 X |?v1 (x)|kx ? vk3 , (2) v?C 1 where wv = f (v) for v ? C 1 and wv,u = 0.5?f (v)? (u ? v) for u ? C 2,v , and X X kf (x) ? ?v1 (x) wv,u ?u2,v (x)k v?C 1 ?? X |?v1 (x)|kx u?C 2,v ? h? 2,v ,C 2,v (x)k + ? v?C 1 +? X v?C 1 X |?v1 (x)|kx ? h? 2,v ,C 2,v (x)k2 v?C 1 |?v1 (x)| X |?u2,v (x)|ku u?C 2,v where wv,u = f (u) for u ? C 2,v . 3 ? h? 
In the same spirit, we can extend the above result on LCC by including additional layers. This leads to the following bound.

Lemma 2.2 (Two-layer LCC Nonlinear Function Approximation) Let $(\gamma, C) = \{(\gamma^1, C^1)\} \cup \{(\gamma^{2,v}, C^{2,v}) : v \in C^1\}$ be an arbitrary two-layer coordinate coding on $R^d$. Let $f$ be an $(\alpha, \beta, \nu)$-Lipschitz smooth function. We have for all $x \in R^d$:

    $\left| f(x) - \sum_{v \in C^1} w_v \gamma^1_v(x) - \sum_{v \in C^1} \gamma^1_v(x) \sum_{u \in C^{2,v}} w_{v,u} \gamma^{2,v}_u(x) \right|$
        $\le 0.5\alpha \left\| x - h_{\gamma^1, C^1}(x) \right\| + 0.5\alpha \sum_{v \in C^1} |\gamma^1_v(x)| \left\| x - h_{\gamma^{2,v}, C^{2,v}}(x) \right\| + \nu \sum_{v \in C^1} |\gamma^1_v(x)| \|x - v\|^3$,   (2)

where $w_v = f(v)$ for $v \in C^1$ and $w_{v,u} = 0.5 \nabla f(v)^\top (u - v)$ for $u \in C^{2,v}$, and

    $\left| f(x) - \sum_{v \in C^1} \gamma^1_v(x) \sum_{u \in C^{2,v}} w_{v,u} \gamma^{2,v}_u(x) \right|$
        $\le \alpha \sum_{v \in C^1} |\gamma^1_v(x)| \left\| x - h_{\gamma^{2,v}, C^{2,v}}(x) \right\| + \beta \sum_{v \in C^1} |\gamma^1_v(x)| \left\| x - h_{\gamma^{2,v}, C^{2,v}}(x) \right\|^2 + \beta \sum_{v \in C^1} |\gamma^1_v(x)| \sum_{u \in C^{2,v}} |\gamma^{2,v}_u(x)| \left\| u - h_{\gamma^{2,v}, C^{2,v}}(x) \right\|^2$,   (3)

where $w_{v,u} = f(u)$ for $u \in C^{2,v}$.

Similar to the interpretation of Lemma 2.1, the bounds in Lemma 2.2 imply that we can approximate a nonlinear function $f(x)$ with a linear function of the form

    $\sum_{v \in C^1} w_v \gamma^1_v(x) + \sum_{v \in C^1} \sum_{u \in C^{2,v}} w_{v,u}\, \gamma^1_v(x)\, \gamma^{2,v}_u(x)$,

where $[w_v]$ and $[w_{v,u}]$ are the unknown linear coefficients to be learned, and $[\gamma^1_v(x)]_{v \in C^1}$ and $[\gamma^{2,v}_u(x)]_{v \in C^1, u \in C^{2,v}}$ form the feature vector. The coding can be learned from unlabeled data by minimizing the right hand side of (2) or (3).

Compared with single-layer coding, we note that the second term on the right hand side of (1) is replaced by the third term on the right hand side of (2). That is, the linear approximation power of the single-layer coding scheme (with a quadratic error term) becomes quadratic approximation power of the two-layer coding scheme (with a cubic error term). The first term on the right hand side of (1) is replaced by the first two terms on the right hand side of (2). If the manifold is relatively flat, then the error terms $\|x - h_{\gamma^1, C^1}(x)\|$ and $\|x - h_{\gamma^{2,v}, C^{2,v}}(x)\|$ will be relatively small in comparison to the second term on the right hand side of (1). In such a case the two-layer coding scheme can potentially improve the single-layer system significantly. This result is similar to that of [16], where the second layer uses local PCA instead of another layer of nonlinear coding. However, the bound in Lemma 2.2 is more refined and specifically applicable to nonlinear coding.

The bound in (2) shows the potential of the two-layer coding scheme to achieve higher order approximation power than single-layer coding. Higher order approximation gives a meaningful improvement when each $|C^{2,v}|$ is relatively small compared to $|C^1|$. On the other hand, if $|C^1|$ is small but each $|C^{2,v}|$ is relatively large, then achieving higher order approximation does not lead to meaningful improvement. In such a case, the bound in (3) shows that the performance of two-level coding is still comparable to that of the one-level coding scheme in (1). This is the situation where the first layer is mainly used to partition the space (while its approximation accuracy is not important), and the main approximation power is achieved with the second layer. The main advantage of two-layer coding in this case is to save computation: instead of solving a single-layer coding system with many parameters, we can solve many smaller coding systems, each with a small number of parameters. This is also the situation where including nonlinearity in the second layer becomes useful, which means that the deep-coding network approach in this paper has some advantage over [16], which can only approximate a linear function with local PCA in the second layer.

3 Deep Coding Network

We shall discuss the computational algorithm motivated by Lemma 2.2. While the two bounds (2) and (3) consider different scenarios depending on the relative size of the first layer versus the second layer, in reality it is difficult to differentiate the two, and usually both bounds play a role at the same time. Therefore we have to consider a mixed effect. Instead of minimizing one bound versus another, we shall use them to motivate our algorithm, and design a method that accommodates the underlying intuition reflected by the two bounds.
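Before turning to the training objectives, note the form of the predictor implied by Lemma 2.2: it is linear in the stacked features $[\gamma^1_v(x)]$ and $[\gamma^1_v(x)\, \gamma^{2,v}_u(x)]$. A sketch of this feature construction (names ours), including the balancing factor $s$ introduced in Section 3.1 below:

```python
import numpy as np

def two_layer_features(gamma1, gamma2, s=1.0):
    """Stack first-layer codes with gated second-layer codes.
       gamma1: (L1,) first-layer codes of x; gamma2: (L1, L2), where row j holds
       the codes of x w.r.t. codebook C^{2,v_j}; s balances the two layers."""
    gated = gamma1[:, None] * gamma2      # features gamma1_j * gamma2_{j,k}
    return np.concatenate([s * gamma1, gated.ravel()])
```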
3.1 Two Layer Formulation

In the following, we let $C^1 = \{v_1, \ldots, v_{L_1}\}$, $\gamma^1_{v_j}(X_i) = \alpha_j^i$, $C^{2,v_j} = \{v_{j,1}, \ldots, v_{j,L_2}\}$, and $\gamma^{2,v_j}_{v_{j,k}}(X_i) = \beta_{j,k}^i$, where $L_1$ is the size of the first-layer codebook, and $L_2$ is the size of each individual codebook at the second layer. We take a layer-by-layer approach for training, where the second layer is regarded as a refinement of the first layer, which is consistent with Lemma 2.2. In the first layer, we learn a simple sparse coding model with all the data:

    $[\alpha^1, C^1] = \arg\min_{\alpha, v} \sum_{i=1}^n \frac{1}{2} \left\| X_i - \sum_{j=1}^{L_1} \alpha_j^i v_j \right\|^2$
    subject to $\alpha_j^i \ge 0$, $\sum_j \alpha_j^i = 1$, $\|v_j\| \le c$,   (4)

where $c$ is some constant; e.g., if all $X_i$ are normalized to have unit length, $c$ can be set to 1. For convenience, we not only enforce the sum-to-one constraint on the sparse coefficients, but also impose nonnegativity constraints, so that $\sum_j |\alpha_j^i| = \sum_j \alpha_j^i = 1$ for all $i$. This gives a probability interpretation of the data, and allows us to approximate the following term on the right hand sides of (2) and (3):

    $\left( \sum_j \alpha_j^i \left\| X_i - \sum_{k=1}^{L_2} \beta_{j,k}^i v_{j,k} \right\|^2 \right)^{1/2} \ge \sum_j \alpha_j^i \left\| X_i - \sum_{k=1}^{L_2} \beta_{j,k}^i v_{j,k} \right\|$.

Note that neither the sum-to-one constraint nor the 1-norm regularization of the coefficients is needed in the derivation of (2), while such constraints are needed in (3). This means additional constraints may hurt performance in the case of (2), although they may help in the case of (3). Since we don't know which case is the dominant effect, as a compromise we remove the sum-to-one constraint but put in a tunable 1-norm regularization. We still keep the positivity constraint for interpretability. This leads to the following formulation for the second layer:

    $[\beta^{2,v_j}, C^{2,v_j}] = \arg\min_{\beta, v} \sum_{i=1}^n \alpha_j^i \left( \frac{1}{2} \left\| X_i - \sum_{k=1}^{L_2} \beta_{j,k}^i v_{j,k} \right\|^2 + \lambda_2 \sum_{k=1}^{L_2} \beta_{j,k}^i \right)$
    subject to $\beta_{j,k}^i \ge 0$, $\|v_{j,k}\| \le 1$,   (5)

where $\lambda_2$ is an $l_1$-norm sparsity regularization parameter controlling the sparseness of the solutions. With the codings on both layers, the sparse representation of $X_i$ is $\left[ s\, \alpha_j^i,\; \alpha_j^i (\beta_{j,1}^i, \beta_{j,2}^i, \ldots, \beta_{j,L_2}^i) \right]_{j=1,\ldots,L_1}$, where $s$ is a scaling factor that balances the coding from the two different layers.

3.2 Multi-layer Extension

The two-level coding scheme can be easily extended to the third and higher layers. For example, at the third layer, for each basis $v_{j,k}$, the third-layer coding solves the following weighted optimization:

    $[\beta^3_{j,k}, C^3_{j,k}] = \arg\min_{\beta, v} \sum_{i=1}^n \beta_{j,k}^i \left( \frac{1}{2} \left\| X_i - \sum_{l=1}^{L_3} \beta_{j,k,l}^i v_{j,k,l} \right\|^2 + \lambda_3 \sum_l \beta_{j,k,l}^i \right)$
    subject to $\beta_{j,k,l}^i \ge 0$, $\|v_{j,k,l}\| \le 1$.   (6)

3.3 Optimization

The optimization problems in Equations (4) to (6) can generally be solved by alternating the following two steps: 1) given the current codebook estimate $v$, compute the optimal sparse coefficients; 2) given the new estimates of the sparse coefficients, optimize the codebooks. Step 1 requires solving an independent optimization problem for each data sample, and it can be computationally very expensive when there are many training examples. In such a case, computational efficiency becomes an important issue. We developed efficient algorithms for solving the optimization problems in Step 1 by exploiting the fact that their solutions are sparse. The optimization problem in Step 1 of (4) can be posed as a nonnegative quadratic programming problem with a single sum-to-one equality constraint. We employ an active set method for this problem that easily handles the constraints [4]. Most importantly, since the optimal solutions are very sparse, the active set method often gives the exact solution after a few dozen iterations.
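As a minimal stand-in for the active set solver (our own sketch, not the paper's algorithm), the Step-1 subproblem of (4) can also be solved by projected gradient with a Euclidean projection onto the simplex:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1} via the sorting algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def code_first_layer(x, V, n_iter=200):
    """Step 1 of (4): min_a 0.5 * ||x - V^T a||^2 over the simplex, by projected
       gradient; V is (L1, d) with the bases v_j as rows."""
    a = np.full(V.shape[0], 1.0 / V.shape[0])
    lr = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        a = project_simplex(a - lr * (V @ (V.T @ a - x)))
    return a
```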
The optimization problem in (5) contains only nonnegative constraints (but not the sum-to-one constraint), for which we employ a pathwise projected Newton (PPN) method [3] that optimizes a block of coordinates per iteration instead of one coordinate at a time as in the active set method. As a result, in typical sparse coding settings (for example, in the experiments that we will present shortly in Section 4), the PPN method is able to give the exact solution of a medium-size (e.g. 2048-dimensional) nonnegative quadratic programming problem in milliseconds. Step 2 can be solved in its dual form, which is a convex optimization problem with nonnegative constraints [9]. Since the dual problem contains only nonnegative constraints, we can still employ the projected Newton method. It is known that the projected Newton method has a superlinear convergence rate under fairly mild conditions [3]. The computational cost of Step 2 is often negligible compared to that of Step 1 when the codebook size is no more than a few thousand.

A significant advantage of the second-layer optimization in our proposal is parallelization. As shown in (5), the second-layer sparse coding decomposes into $L_1$ independent coding problems, and thus can be naturally parallelized. In our implementation, this is done through Hadoop.
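Because the $L_1$ second-layer problems are independent, they parallelize trivially. A single-machine analogue of the Hadoop setup, sketched with Python's multiprocessing (the inner solvers are simple stand-ins for the paper's PPN and dual Newton methods, and all names are ours):

```python
from multiprocessing import Pool
import numpy as np

def nn_coding(x, V, lam2, n_iter=200):
    """Nonnegative l1-regularized coding for (5) by projected gradient.
       Note: the weight alpha_j^i scales both terms of (5), so it drops out of
       each sample's coding subproblem and only affects the codebook update."""
    b = np.zeros(V.shape[0])
    lr = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        b = np.maximum(b - lr * (V @ (V.T @ b - x) + lam2), 0.0)
    return b

def train_second_layer(job):
    """Fit one second-layer codebook C^{2,v_j}: one independent job per first-layer basis."""
    j, X, w, L2, lam2 = job                       # w[i] = alpha_j^i, first-layer weights
    rng = np.random.default_rng(j)
    V = rng.standard_normal((L2, X.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(5):                            # alternate coding and codebook updates
        B = np.stack([nn_coding(x, V, lam2) for x in X])
        G = (B * w[:, None]).T @ B + 1e-6 * np.eye(L2)
        V = np.linalg.solve(G, (B * w[:, None]).T @ X)          # weighted least squares
        V /= np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1.0)  # ||v|| <= 1
    return j, V

# jobs = [(j, X, alpha[:, j], 64, 0.1) for j in range(L1)]
# with Pool() as pool:
#     codebooks = dict(pool.map(train_second_layer, jobs))
```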
We can see that the second-layer bases provide deeper details that help to further explain their first-layer parent basis; on the other hand, the parent first-layer basis provides an informative context for its child second-layer bases. For example, in the seventh row in Fig. 1, where the first-layer basis is like Digit 7, this basis can come from Digit 7, Digit 9 or even Digit 4. Its second-layer bases then help to further explain the meaning of the first-layer basis: among its associated second-layer bases, the first two bases in that row are parts of Digit 9, while the last basis in that row is a part of Digit 4. Meanwhile, the first-layer 7-like basis provides important context for its second-layer part-like bases; without the first-layer basis, the fragmented parts (like the first two second-layer bases in that row) may not be very informative. The zoomed-in details contained in deeper bases significantly help a classifier to resolve difficult examples, and interestingly, coarser details provide useful context for finer details.

Single-layer sparse coding:
  Number of bases (L1):     512   1024  2048  4096
  Local coordinate coding:  2.60  2.17  1.79  1.75
  Extended LCC:             1.95  1.82  1.78  1.64

Two-layer sparse coding (L1 = 64):
  Number of bases (L2):     64    128   256   512
  Two-layer coding:         1.85  1.69  1.53  1.51

Table 1: The classification error rate (in %) on the MNIST dataset with different sparse coding schemes.

[Figure 1: Example of bases from a two-layer coding network on MNIST data; panels show first-layer bases and second-layer bases. For each row, the first image is a first-layer basis, and the remaining images are its associated second-layer bases. The colorbar is the same for all images, but the range it represents differs from image to image; generally, the background color of an image represents the zero value, and the colors above and below that color respectively represent positive and negative values.]

4.2 PASCAL 2007

The PASCAL 2007 dataset [6] consists of 20 categories of images, such as airplanes, persons, cats, tables, and so on. It consists of 2501 training images and 2510 validation images, and the task is to classify an image into one or more of the 20 categories. The task can therefore be cast as training 20 binary classifiers. The critical issue is how to extract effective visual features from the images. Among different methods, one particularly effective approach is to use sparse coding to derive a codebook of low-level features (such as SIFT) and represent an image as a bag of visual words [15]. Here, we intend to learn two-layer hierarchical codebooks instead of a single flat codebook for the bag-of-words image representation. In our experiments, we first sampled dense SIFT descriptors (each represented by a 128x1 vector) on each image at four scales, 7x7, 16x16, 25x25 and 31x31, with a step size of 4. Then, the SIFT descriptors from all images (both training and validation images) were used to learn first-layer codebooks of different dimensions, L1 = 512, 1024 and 2048. Then, given a first-layer codebook, for each basis in the codebook we learned its second-layer codebook of size 64 by solving the weighted optimization in (5). Again, the second-layer codebook learning was done in parallel using Hadoop.
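Two small sketches may make this pipeline concrete. First, the per-basis decomposition that Hadoop exploits is easy to mimic locally, since the $L_1$ second-layer problems of Eq. (5) are independent; below, `solver` stands in for any solver of (5), and the use of Python multiprocessing in place of Hadoop is our assumption.

```python
from functools import partial
from multiprocessing import Pool

def second_layer_job(j, X, alpha1, solver, L2, lam2):
    # Weighted problem of Eq. (5) for first-layer basis j: each sample i
    # is weighted by its first-layer coefficient alpha1[i, j].
    return j, solver(X, weights=alpha1[:, j], n_bases=L2, l1_penalty=lam2)

def learn_all_second_layer(X, alpha1, solver, L2=64, lam2=0.1, workers=8):
    # One independent job per first-layer basis: this is the structure the
    # paper exploits with Hadoop, here mapped over local worker processes.
    job = partial(second_layer_job, X=X, alpha1=alpha1,
                  solver=solver, L2=L2, lam2=lam2)
    with Pool(workers) as pool:
        results = pool.map(job, range(alpha1.shape[1]))
    return dict(results)
```

Second, once both layers are learned, the representation of a sample takes the form $[s\,\alpha^i_j, \alpha^i_j \alpha^i_{j,k}]_{j=1,\ldots,L_1}$ given after Eq. (5); the sketch below assembles it, and its output length $L_1(1+L_2)$ matches the dimension counts quoted here (e.g., $1024 + 1024 \times 64 = 66{,}560$). The function name and the default for $s$ are ours.

```python
import numpy as np

def two_layer_code(alpha1, alpha2, s=1.0):
    # alpha1: (L1,) first-layer coefficients of one sample.
    # alpha2: (L1, L2) second-layer coefficients, row j coding the sample
    #         against the codebook attached to first-layer basis j.
    parts = []
    for j in range(alpha1.shape[0]):
        parts.append([s * alpha1[j]])
        parts.append(alpha1[j] * alpha2[j])
    return np.concatenate(parts)  # length L1 * (1 + L2)
```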
With the first-layer and second-layer codebooks, each SIFT feature was coded into a very high-dimensional space: using L1 = 1024 as an example, the coding dimension in total is 1024 + 1024 x 64 = 66,560. For each image, we employed 1x1, 2x2 and 1x3 spatial pyramid matching with max-pooling. Therefore, in the end each image is represented by a 532,480 (= 66,560 x 8) dimensional vector for L1 = 1024. Table 2 shows the classification results. It is clear that the two-layer sparse coding performs significantly better than its single-layer counterpart. We would like to point out that, although we simply employed max-pooling in the experiments, it may not be the best pooling strategy for the hierarchical coding scheme presented in this paper. We believe a better pooling scheme needs to take the hierarchical structure into account, but this remains an open problem and is part of our future work.

  Dimension of the first layer (L1):   512   1024  2048
  Single-layer sparse coding:          42.7  45.3  48.4
  Two-layer sparse coding (L2 = 64):   51.1  52.8  53.3

Table 2: Average precision (in %) of classification on the PASCAL07 dataset using different sparse coding schemes.

5 Conclusion

This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, in which a two-layer coding scheme is derived based on a theoretical analysis of nonlinear functional approximation that extends recent results on local coordinate coding. The two-layer approach can be easily generalized to deeper structures in a hierarchical multi-layer manner. There are two main advantages of multi-layer coding: it can potentially achieve better performance because the deeper layers provide more details and structures, and it is computationally more efficient because the coding is decomposed into smaller problems. Experiments showed that two-layer coding can significantly improve the performance of single-layer coding. As for future directions, it will be interesting to explore deep coding networks with more than two layers. The formulation proposed in this paper grants a straightforward extension from two layers to multiple layers. For small datasets like MNIST, the two-layer scheme already seems very powerful. However, for more complicated data, deeper coding with multiple layers may be an effective way of gaining finer and finer features. For example, the first-layer coding may pick up large categories such as humans, bikes, cups, and so on; then, for the human category, the second-layer coding may find differences among adults, teenagers, and seniors; and the third layer may find even finer features, such as race features at different ages.

References

[1] http://yann.lecun.com/exdb/mnist/.
[2] Samy Bengio, Fernando Pereira, Yoram Singer, and Dennis Strelow. Group sparse coding. In NIPS '09, 2009.
[3] D. P. Bertsekas. Projected Newton methods for optimization problems with simple constraints. SIAM J. Control Optim., 20(2):221-246, 1982.
[4] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 2003.
[5] David Bradley and J. Andrew (Drew) Bagnell. Differentiable sparse coding. In Proceedings of Neural Information Processing Systems 22, December 2008.
[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[7] Mark Everingham. Overview and results of the classification challenge.
The PASCAL Visual Object Classes Challenge Workshop at ICCV, 2009.
[8] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, July 2006.
[9] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Proceedings of Neural Information Processing Systems (NIPS) 19, 2007.
[10] Michael S. Lewicki and Terrence J. Sejnowski. Learning overcomplete representations. Neural Computation, 12:337-365, 2000.
[11] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. In NIPS '08, 2008.
[12] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[13] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In International Conference on Machine Learning, 2007.
[14] Marc'Aurelio Ranzato, Y-Lan Boureau, and Yann LeCun. Sparse feature learning for deep belief networks. In NIPS '07, 2007.
[15] Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang. Linear spatial pyramid matching using sparse coding for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[16] Kai Yu and Tong Zhang. Improved local coordinate coding using local tangents. In ICML '10, 2010.
[17] Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. In NIPS '09, 2009.
Reinforcement Learning in Markovian and Non-Markovian Environments

Jürgen Schmidhuber
Institut für Informatik, Technische Universität München
Arcisstr. 21, 8000 München 2, Germany
[email protected]

Abstract

This work addresses three problems with reinforcement learning and adaptive neuro-control: 1. Non-Markovian interfaces between learner and environment. 2. On-line learning based on system realization. 3. Vector-valued adaptive critics. An algorithm is described which is based on system realization and on two interacting fully recurrent continually running networks which may learn in parallel. Problems with parallel learning are attacked by 'adaptive randomness'. It is also described how interacting model/controller systems can be combined with vector-valued 'adaptive critics' (previous critics have been scalar).

1 INTRODUCTION

At a given time, an agent with a non-Markovian interface to its environment cannot derive an optimal next action by considering its current input only. The algorithm described below differs from previous reinforcement algorithms in at least some of the following issues: It has a potential for on-line learning and non-Markovian environments, it is local in time and in principle it allows arbitrary time lags between actions and ulterior consequences; it does not care for something like episode boundaries, it allows vector-valued reinforcement, it is based on two interacting fully recurrent continually running networks, and it tries to construct a full environmental model, thus providing complete 'credit assignment paths' into the past. We dedicate one or more conventional input units (called pain and pleasure units) for the purpose of reporting the actual reinforcement to a fully recurrent control network. Pain and pleasure input units have time-invariant desired values.

We employ the IID-Algorithm (Robinson and Fallside, 1987) for training a fully recurrent model network to model the relationships between environmental inputs, output actions of an agent, and corresponding pain or pleasure. The model network (e.g. (Werbos, 1987)(Jordan, 1988)(Robinson and Fallside, 1989)) in turn allows the system to compute controller gradients for 'minimizing pain' and 'maximizing pleasure'. Since reinforcement gradients depend on 'credit assignment paths' leading 'backwards through the environment', the model network should not only predict the pain and pleasure units but also the other input units. The quantity to be minimized by the model network is $\sum_t \sum_i (y_i(t) - y_i^{pred}(t))^2$, where $y_i(t)$ is the activation of the $i$th input unit at time $t$, and $y_i^{pred}(t)$ is the model's prediction of the activation of the $i$th input unit at time $t$. The quantity to be minimized by the controller is $\sum_t \sum_i (c_i - r_i(t))^2$, where $r_i(t)$ is the activation of the $i$th pain or pleasure input unit at time $t$ and $c_i$ is its desired activation for all times. Here $t$ ranges over all (discrete) time steps. Weights are changed at each time step; this relieves dependence on 'episode boundaries'. The assumption is that the learning rates are small enough to avoid instabilities (Williams and Zipser, 1989).

There are two versions of the algorithm: the sequential version and the parallel version. With the sequential version, the model network is first trained by providing it with randomly chosen examples of sequences of interactions between controller and environment.
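Concretely, the two objectives above can be accumulated over a recorded trajectory as in the sketch below; the array layout and names are ours, not the paper's.

```python
import numpy as np

def model_and_controller_errors(inputs, predictions, pain_pleasure, desired):
    # inputs, predictions: (T, n_in) observed inputs and the model's
    #   one-step-ahead predictions of them.
    # pain_pleasure: (T, n_pp) activations of the pain/pleasure units.
    # desired: (n_pp,) their time-invariant desired values c_i.
    model_error = np.sum((inputs - predictions) ** 2)
    controller_error = np.sum((desired[None, :] - pain_pleasure) ** 2)
    return model_error, controller_error
```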
Then the model's weights are fixed to their current values, and the controller begins to learn. With the parallel version, both the controller and the model learn concurrently. One advantage of the parallel version is that the model network focuses only on those parts of the environmental dynamics with which the controller is typically confronted. Another advantage is the applicability to changing environments. Some disadvantages of the parallel version are listed next.

1. Imperfect model networks. The model which is used to compute gradient information for the controller may be wrong. However, if we assume that the model network always finds a zero-point of its error function, then over time we can expect the control network to perform gradient descent according to a perfect model of the visible parts of the real world. 1.A: The assumption that the model network can always find a zero-point of its error function is not valid in the general case. One of the reasons is the old problem of local minima, for which this paper does not suggest any solutions. 1.B: (Jordan, 1988) notes that a model network does not need to be perfect to allow increasing performance of the control network.

2. Instabilities. One source of instability could arise if the model network 'forgets' information about the environmental dynamics because the activities of the controller push it into a new sub-domain, such that the weights responsible for the old, well-modeled sub-domain become overwritten.

3. Deadlock. Even if the model's predictions are perfect for all actions executed by the controller, this does not imply that the algorithm will always behave as desired. Let us assume that the controller enters a local minimum relative to the current state of an imperfect model network. This relative minimum might cause the controller to execute the same action again and again (in a certain spatio-temporal context), while the model does not get a chance to learn something about the consequences of alternative actions (this is the deadlock).

The sequential version lacks the flavor of on-line learning and is bound to fail as soon as the environment changes significantly. We will introduce 'adaptive randomness' for the controller outputs to attack problems of the parallel version.

2 THE ALGORITHM

The sequential version of the algorithm can be obtained in a straightforward manner from the description of the parallel version below. At every time step, the parallel version performs essentially the same operations: In step 1 of the main loop of the algorithm, actions to be performed in the external world are computed. These actions are based on both current and previous inputs and outputs. For all new activations, the corresponding derivatives with respect to all controller weights are updated. In step 2, actions are executed in the external world, and the effects of the current action and/or previous actions may become visible. In step 3, the model network sees the last input and the current output of the controller at the same time. The model network tries to predict the new input without seeing it. Again the relevant gradient information is computed. In step 4, the model network is updated in order to better predict the input (including pleasure and pain) for the controller. The weights of the control network are updated in order to minimize the cumulative differences between desired and actual activations of the pain and pleasure units. (A skeletal version of this four-step loop is sketched below.)
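The following skeleton mirrors that four-step description; the network and environment methods are placeholders of ours, standing in for the recurrent forward passes and gradient updates spelled out in the pseudocode further below.

```python
def parallel_version_step(controller, model, env):
    # Step 1: controller computes actions from current and past I/O,
    # updating its activation derivatives as it goes.
    actions = controller.forward_and_accumulate_gradients(env.last_input)
    # Step 2: actions are executed; their effects may become visible.
    new_input, pain_pleasure = env.execute(actions)
    # Step 3: model predicts the new input from last input + actions,
    # without seeing the new input, accumulating its own gradients.
    predicted = model.forward_and_accumulate_gradients(env.last_input, actions)
    # Step 4: model learns to predict better; controller learns to push
    # pain/pleasure predictions toward their desired values.
    model.update_weights(target=new_input, prediction=predicted)
    controller.update_weights(pain_pleasure, desired=controller.desired_values)
    env.last_input = new_input
```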
'Teacher forcing' (Williams and Zipser, 1989) is used in the model network (although there is no teacher besides the environment). The partial derivatives of the controller's inputs with respect to the controller's weights are approximated by the partial derivatives of the corresponding predictions generated by the model network.

Notation (the reader may find it convenient to compare with (Williams and Zipser, 1989)): $G$ is the set of all non-input units of the control network, $A$ is the set of its output units, $I$ is the set of its 'normal' input units, $P$ is the set of its pain and pleasure units, $M$ is the set of all units of the model network, $O$ is the set of its output units, $O_P \subset O$ is the set of all units that predict pain or pleasure, $W_M$ is the set of variables for the weights of the model network, $W_C$ is the set of variables for the weights of the control network. $y_{k_{new}}$ is the variable for the updated activation of the $k$th unit from $M \cup G \cup I \cup P$, $y_{k_{old}}$ is the variable for the last value of $y_{k_{new}}$, and $w_{ij}$ is the variable for the weight of the directed connection from unit $j$ to unit $i$. $\delta_{ik}$ is the Kronecker delta, which is 1 for $i = k$ and 0 otherwise. $p^k_{ij_{new}}$ is the variable which gives the current (approximated) value of $\partial y_{k_{new}} / \partial w_{ij}$, and $p^k_{ij_{old}}$ gives the last value of $p^k_{ij_{new}}$. If $k \in P$, then $c_k$ is $k$'s desired activation for all times; if $k \in I \cup P$, then $k_{pred}$ is the unit from $O$ which predicts $k$. $\alpha_C$ is the learning rate for the control network, $\alpha_M$ the learning rate for the model network. $|I \cup P| = |O|$, $|O_P| = |P|$. Each unit in $I \cup P \cup A$ has one forward connection to each unit in $M \cup G$, each unit in $M$ is connected to each other unit in $M$, and each unit in $G$ is connected to each other unit in $G$. Each weight variable of a connection leading to a unit in $M$ belongs to $W_M$; each weight variable of a connection leading to a unit in $G$ belongs to $W_C$. For each weight $w_{ij} \in W_M$ there are $p^k_{ij}$ values for all $k \in M$; for each weight $w_{ij} \in W_C$ there are $p^k_{ij}$ values for all $k \in M \cup G \cup I \cup P$.

The parallel version of the algorithm works as follows:

INITIALIZATION:
$\forall w_{ij} \in W_M \cup W_C$: $w_{ij} \leftarrow$ random; $\forall$ possible $k$: $p^k_{ij_{old}} \leftarrow 0$, $p^k_{ij_{new}} \leftarrow 0$.
$\forall k \in M \cup G$: $y_{k_{old}} \leftarrow 0$, $y_{k_{new}} \leftarrow 0$.
$\forall k \in I \cup P$: set $y_{k_{old}}$ according to the current environment, $y_{k_{new}} \leftarrow 0$.

UNTIL TERMINATION CRITERION IS REACHED:
1. $\forall i \in G$: $y_{i_{new}} \leftarrow \frac{1}{1 + e^{-\sum_j w_{ij} y_{j_{old}}}}$.
   $\forall w_{ij} \in W_C, k \in G$: $p^k_{ij_{new}} \leftarrow y_{k_{new}}(1 - y_{k_{new}})\big(\sum_l w_{kl}\, p^l_{ij_{old}} + \delta_{ik}\, y_{j_{old}}\big)$.
   $\forall k \in G$: $y_{k_{old}} \leftarrow y_{k_{new}}$; $\forall w_{ij} \in W_C$: $p^k_{ij_{old}} \leftarrow p^k_{ij_{new}}$.
2. Execute all actions based on the activations of the units in $A$. Update the environment. $\forall i \in I \cup P$: set $y_{i_{new}}$ according to the environment.
3. $\forall i \in M$: $y_{i_{new}} \leftarrow \frac{1}{1 + e^{-\sum_j w_{ij} y_{j_{old}}}}$.
   $\forall w_{ij} \in W_M \cup W_C, k \in M$: $p^k_{ij_{new}} \leftarrow y_{k_{new}}(1 - y_{k_{new}})\big(\sum_l w_{kl}\, p^l_{ij_{old}} + \delta_{ik}\, y_{j_{old}}\big)$.
   $\forall k \in M$: $y_{k_{old}} \leftarrow y_{k_{new}}$; $\forall w_{ij} \in W_C \cup W_M$: $p^k_{ij_{old}} \leftarrow p^k_{ij_{new}}$.
4. $\forall w_{ij} \in W_M$: $w_{ij} \leftarrow w_{ij} + \alpha_M \sum_{k \in I \cup P} \big(y_{k_{new}} - y_{k_{pred,old}}\big)\, p^{k_{pred}}_{ij_{old}}$.
   $\forall w_{ij} \in W_C$: $w_{ij} \leftarrow w_{ij} + \alpha_C \sum_{k \in P} \big(c_k - y_{k_{new}}\big)\, p^{k_{pred}}_{ij_{new}}$.
   $\forall k \in I \cup P$: $y_{k_{old}} \leftarrow y_{k_{new}}$, $y_{k_{pred,old}} \leftarrow y_{k_{new}}$; $\forall w_{ij} \in W_M$: $p^{k_{pred}}_{ij_{old}} \leftarrow 0$; $\forall w_{ij} \in W_C$: $p^k_{ij_{old}} \leftarrow p^{k_{pred}}_{ij_{new}}$.

The algorithm is local in time, but not in space. The computational complexity per time step is $O(|W_M \cup W_C|\,|M|\,|M \cup I \cup P \cup A| + |W_C|\,|G|\,|I \cup P \cup G|)$. In what follows we describe some useful extensions of the scheme.

1. More network ticks than environmental ticks.
For highly 'non-linear' environments, the algorithm has to be modified in a trivial manner such that the involved networks perform more than one (but not more than three) iterations of step 1 and step 3 at each time step. (4-layer operations can in principle produce an arbitrary approximation of any desired mapping.)

2. Adaptive randomness. Explicit explorative random search capabilities can be introduced by probabilistic controller outputs and 'gradient descent through random number generators' (Williams, 1988). We adjust both the mean and the variance of the controller actions. In the context of the IID algorithm, this works as follows: A probabilistic output unit $k$ consists of a conventional unit $k_\mu$ which acts as a mean generator and a conventional unit $k_\sigma$ which acts as a variance generator. At a given time, the probabilistic output $y_{k_{new}}$ is computed by $y_{k_{new}} = y_{k_{\mu,new}} + z\, y_{k_{\sigma,new}}$, where $z$ is distributed e.g. according to the normal distribution. The corresponding $p^k_{ij_{new}}$ must then be updated according to the following rule:

$$p^k_{ij_{new}} \leftarrow p^{k_\mu}_{ij_{new}} + \frac{y_{k_{new}} - y_{k_{\mu,new}}}{y_{k_{\sigma,new}}}\, p^{k_\sigma}_{ij_{new}}.$$

A more sophisticated strategy to improve the model network is to introduce 'adaptive curiosity and boredom'. The principle of adaptive curiosity for model-building neural controllers (Schmidhuber, 1990a) says: Spend additional reinforcement whenever there is a mismatch between the expectations of the model network and reality.

3. Perfect models. Sometimes one can gain a 'perfect' model by constructing an appropriate mathematical description of the environmental dynamics. This saves the time needed to train the model. However, additional external knowledge is required. For instance, the description of the environment might be in the form of differential or difference equations. In the context of the algorithm above, this means introducing new $p_{ij}$ variables for each $w_{ij} \in W_C$ and each relevant state variable $\eta(t)$ of the dynamical environment. The new variables serve to accumulate the values of $\partial \eta(t) / \partial w_{ij}$. This can be done in exactly the same cumulative manner as with the activations of the model network above.

4. Augmenting the algorithm by TD-methods. The following ideas are not limited to recurrent nets, but are also relevant for feed-forward controllers in Markovian environments. It is possible to augment model-building algorithms with an 'adaptive critic' method. To simplify the discussion, let us assume that there are no pleasure units, just pain units. The algorithm's goal is to minimize cumulative pain. We introduce the TD principle (Sutton, 1988) by changing the error function of the units in $O_P$: At a given time $t$, the contribution of each unit $k_{pred} \in O_P$ to the model network's error is $y_{k_{pred}}(t) - \gamma\, y_{k_{pred}}(t+1) - y_k(t+1)$, where $y_i(t)$ is the activation of unit $i$ at time $t$, and $0 < \gamma < 1$ is a discount factor for avoiding predictions of infinite sums. Thus $O_P$ is trained to predict the sum of all (discounted) future pain vectors and becomes a vector-valued adaptive critic. (This affects the first $\forall$-loop in step 4.) The controller's goal is to minimize the absolute value of $M$'s pain predictions. Thus, the contribution of time $t$ to the error function of the controller now becomes $\sum_{k_{pred} \in O_P} (y_{k_{pred}}(t))^2$. This affects the second $\forall$-loop in step 4 of the algorithm. Note that it is not a state which is evaluated by the adaptive critic component, but a combination of a state and an action. This makes the approach similar to (Jordan and Jacobs, 1990); a small sketch of this vector-valued TD target follows.
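Under our own array conventions (one critic output per pain unit), the vector-valued TD error and the controller's cost at time $t$ might be computed as follows; this only evaluates the targets, not the full recurrent updates.

```python
import numpy as np

def vector_td_errors(pred_t, pred_t1, pain_t1, gamma=0.9):
    # pred_t, pred_t1: (n_pain,) critic predictions at times t and t+1.
    # pain_t1: (n_pain,) actual pain activations at time t+1.
    # TD error per pain unit: y_pred(t) - gamma*y_pred(t+1) - y(t+1);
    # driving each component to zero makes the critic predict the
    # discounted sum of future pain vectors.
    return pred_t - gamma * pred_t1 - pain_t1

def controller_cost(pred_t):
    # The controller minimizes the squared pain predictions at time t.
    return np.sum(pred_t ** 2)
```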
(Schmidhuber, 1990a) shows how a recurrent model/controller combination can be used for look-ahead planning without using TD-methods.

3 EXPERIMENTS

The following experiments were conducted by the TUM students Josef Hochreiter and Klaus Bergner. See (Schmidhuber, 1990a) and (Schmidhuber, 1990b) for the full details.

1. Evolution of a flip-flop by reinforcement learning. A controller K had to learn to behave like a flip-flop as described in (Williams and Zipser, 1989). The main difficulty (the one which makes this different from the supervised approach as described in (Williams and Zipser, 1989)) was that there was no teacher for K's (probabilistic) output units. Instead, the system had to generate alternative outputs in a variety of spatio-temporal contexts, and to build a model of the often 'painful' consequences. K's only goal information was the activation of a pain input unit whenever it produced an incorrect output. With $|K| = 3$, $|M| = 4$, $\alpha_C = 0.1$ and $\alpha_M = 1.0$, 20 out of 30 test runs with the parallel version required less than 1,000,000 time steps to produce an acceptable solution.

Why does it take much more time to solve the reinforcement flip-flop problem than the corresponding supervised flip-flop problem? One answer is: With supervised learning the controller gradient is given to the system, while with reinforcement learning the gradient has to be discovered by the system.

2. 'Non-Markovian' pole balancing. A cart-pole system was modeled by the same differential equations used for a related balancing task described in (Anderson, 1986). In contrast to previous pole balancing tasks, however, no information about the temporal derivatives of cart position and pole angle was provided. (Similar experiments are mentioned in (Piche, 1990).) In our experiments the cart-pole system would not stabilize indefinitely. However, significant performance improvement was obtained. The best results were achieved by using a 'perfect model' as described above: Before learning, the average time until failure was about 25 time steps. Within a few hundred trials one could observe trials with more than 1000 time steps of balancing time. 'Friendly' initial conditions could lead to balancing times of more than 3000 time steps.

3. 'Markovian' pole balancing with a vector-valued adaptive critic. The adaptive critic extension described above does not need a non-Markovian environment to demonstrate advantages over previous adaptive critics: A four-dimensional adaptive critic was tested on the pole balancing task described in (Anderson, 1986). The critic component had four output units for predicting four different kinds of 'pain': two for bumps against the two edges of the track and two for pole crashes. None of five conducted test runs took more than 750 failures to achieve the first trial with more than 30,000 time steps. (The longest run reported by (Anderson, 1986) took about 29,000 time steps, and more than 7000 failures had to be experienced to achieve that result.)

4 SOME LIMITATIONS OF THE APPROACHES

1. The recurrent network algorithms are not local in space.
2. As with all gradient descent algorithms, there is the problem of local minima. This paper does not offer any solutions to this problem.
3. More severe limitations of the algorithm are inherent problems of the concepts of 'gradient descent through time' and adaptive critics.
Neither gradient descent nor adaptive critics are practical when there are long time lags between actions and ultimate consequences. For this reason, first steps are made in (Schmidhuber, 1990c) towards adaptive sub-goal generators and adaptive 'causality detectors'.

Acknowledgements

I wish to thank Josef Hochreiter and Klaus Bergner, who conducted the experiments. This work was supported by a scholarship from SIEMENS AG.

References

Anderson, C. W. (1986). Learning and Problem Solving with Multilayer Connectionist Systems. PhD thesis, University of Massachusetts, Dept. of Comp. and Inf. Sci.

Jordan, M. I. (1988). Supervised learning and systems with excess degrees of freedom. Technical Report COINS TR 88-27, MIT.

Jordan, M. I. and Jacobs, R. A. (1990). Learning to control an unstable system with forward modeling. In Proc. of the 1990 Connectionist Models Summer School, in press. San Mateo, CA: Morgan Kaufmann.

Piche, S. W. (1990). Draft: First order gradient descent training of adaptive discrete time dynamic networks. Technical report, Dept. of Electrical Engineering, Stanford University.

Robinson, A. J. and Fallside, F. (1987). The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department.

Robinson, T. and Fallside, F. (1989). Dynamic reinforcement driven error propagation networks with application to game playing. In Proceedings of the 11th Conference of the Cognitive Science Society, Ann Arbor, pages 836-843.

Schmidhuber, J. H. (1990a). Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90 (revised), Institut für Informatik, Technische Universität München. (Revised and extended version of an earlier report from February.)

Schmidhuber, J. H. (1990b). Networks adjusting networks. Technical Report FKI-125-90 (revised), Institut für Informatik, Technische Universität München. (Revised and extended version of an earlier report from February.)

Schmidhuber, J. H. (1990c). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Institut für Informatik, Technische Universität München.

Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44.

Werbos, P. J. (1987). Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17.

Williams, R. J. (1988). On the use of backpropagation in associative reinforcement learning. In IEEE International Conference on Neural Networks, San Diego, volume 2, pages 263-270.

Williams, R. J. and Zipser, D. (1989). Experimental analysis of the real-time recurrent learning algorithm. Connection Science, 1(1):87-111.
Sparse Coding for Learning Interpretable Spatio-Temporal Primitives

Taehwan Kim, TTI Chicago, [email protected]
Gregory Shakhnarovich, TTI Chicago, [email protected]
Raquel Urtasun, TTI Chicago, [email protected]

Abstract

Sparse coding has recently become a popular approach in computer vision to learn dictionaries of natural images. In this paper we extend the sparse coding framework to learn interpretable spatio-temporal primitives. We formulate the problem as a tensor factorization problem with tensor group norm constraints over the primitives, diagonal constraints on the activations that provide interpretability, as well as smoothness constraints that are inherent to human motion. We demonstrate the effectiveness of our approach in learning interpretable representations of human motion from motion capture data, and show that our approach outperforms recently developed matching pursuit and sparse coding algorithms.

1 Introduction

In recent years sparse coding has become a popular paradigm for learning dictionaries of natural images [10, 1, 4]. The learned representations have proven very effective in computer vision tasks such as image denoising [4], inpainting [10, 8] and object recognition [1]. In these approaches, sparse coding was formulated as the sum of a data fitting term, typically the Frobenius norm, and a regularization term that imposes sparsity. The $\ell_1$ norm is typically used, as it is convex, instead of other sparsity penalties such as the $\ell_0$ pseudo-norm. However, the sparsity induced by these norms is local: the estimated representations are sparse in that most of the activations are zero, but the sparsity has no structure, i.e., there is no preference as to which coefficients are active. Mairal et al. [9] extend the sparse coding formulation of natural images to impose structure by first clustering the set of image patches and then learning a dictionary where members of the same cluster are encouraged to share sparsity patterns. In particular, they use group norms so that the sparsity patterns are shared within a group.

Here we are interested in the problem of learning dictionaries of human motion. Learning spatio-temporal representations of motion has been addressed in the neuroscience and motor control literature, in the context of motor synergies [13, 5, 14]. However, most approaches have focused on learning static primitives, such as those obtained by linear subspace models applied to individual frames of motion [12, 15]. One notable exception is the work of d'Avella et al. [3], where the goal was to recover primitives from time series of EMG signals recorded from a set of frog muscles. Using matching pursuit [11] and an $\ell_0$-type regularization as the underlying mechanism to learn primitives, [3] performed matrix factorization of the time series; the recovered factors represent the primitive dictionary and the primitive activations. However, this technique suffers from the inherent limitations of $\ell_0$ regularization, which is combinatorial in nature and thus difficult to optimize; therefore [3] resorted to a greedy algorithm that is subject to the inherent limitations of such an approach.

In this paper we propose to extend the sparse coding framework to learn motion dictionaries. In particular, we cast the problem of learning spatio-temporal primitives as a tensor factorization problem and introduce tensor group norms over the primitives that encourage sparsity in order to learn the number of elements in the dictionary.
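For readers unfamiliar with group norms, the sketch below evaluates the tensor group norm that is defined later in the paper (Section 2.2) for a primitive tensor of shape (P, Q, D); the naming and axis order are our assumptions.

```python
import numpy as np

def tensor_group_norm(W, p=2, q=1, r=1):
    # l_{p,q,r}(W) = ( sum_i ( sum_j ( sum_k |W_ijk|^p )^(q/p) )^(r/q) )^(1/r),
    # with i over primitives, j over time frames, k over dimensions.
    inner = np.sum(np.abs(W) ** p, axis=2) ** (q / p)   # shape (P, Q)
    middle = np.sum(inner, axis=1) ** (r / q)           # shape (P,)
    return np.sum(middle) ** (1.0 / r)

# With p=2, q=1, r=1 this is the l_{2,1,1} norm used in the experiments:
# a sum over primitives and frames of per-frame Euclidean norms, which
# drives whole frames (and hence primitive lengths) to zero.
```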
The introduction of additional diagonal constraints in the activations, as well as smoothness constraints that are inherent to human motion, will allow us to learn interpretable representations of human motion from motion capture data. As demonstrated in our experiments, our approach outperforms state-of-the-art matching pursuit [3], as well as recently developed sparse coding algorithms [7].

2 Sparse coding for motion dictionary learning

In this section we first review the framework of sparse coding, and then show how to extend this framework to learn interpretable dictionaries of human motion.

2.1 Traditional sparse coding

Let $Y = [y_1, \cdots, y_N]$ be the matrix formed by concatenating the set of training examples drawn i.i.d. from $p(y)$. Sparse coding is usually formulated as a matrix factorization problem composed of a data fitting term, typically the Frobenius norm, and a regularizer that encourages sparsity of the activations,

$$\min_{W,H} \ \|Y - WH\|_F^2 + \lambda\, \Omega(H),$$

or equivalently

$$\min_{W,H} \ \|Y - WH\|_F^2 \quad \text{subject to} \quad \Omega(H) \le \epsilon_{sparse},$$

where $\lambda$ and $\epsilon_{sparse}$ are parameters of the model. Additional bounding constraints on $W$ are typically employed, since there is an ambiguity in the scaling of $W$ and $H$. In this formulation $W$ is the dictionary, with $w_i$ the dictionary elements, $H$ is the matrix of activations, and $\Omega(H)$ is a regularizer that induces sparsity. Solving this problem involves non-convex optimization. However, solving with respect to $W$ or $H$ alone is convex if $\Omega$ is a convex function of $H$. As a consequence, $\Omega$ is usually taken to be the $\ell_1$ norm, i.e., $\Omega(H) = \sum_{i,j} |h_{i,j}|$, and an alternate minimization scheme is typically employed [7]. If the problem has more structure, one would like to use this structure in order to learn non-local sparsity patterns. Mairal et al. [9] exploit group norm sparsity priors to learn dictionaries of natural images by first clustering the training image patches, and then learning a dictionary where members of the same cluster are encouraged to share sparsity patterns. In particular, they use the $\ell_{2,1}$ norm, defined as $\Omega(H) = \sum_k \|h_k\|_2$, where $h_k$ are the elements of $H$ that are members of the $k$-th group. Note that the members of a group need not be rows or columns; more complex group structures can be employed [6]. However, the structure imposed by these group norms is not sufficient for learning interpretable motion primitives. We now show how, in the case of motion, we can consider the activations and the primitives as tensors and impose group norm sparsity on the tensors. Moreover, we impose additional constraints, such as continuity and differentiability, that are inherent to human motion data, as well as diagonal constraints that ensure interpretability.

2.2 Motion dictionary learning

Let $Y \in \mathbb{R}^{D \times L}$ be a $D$-dimensional signal of temporal length $L$. We formulate the problem of learning dictionaries of human motion as a tensor factorization problem where the matrix $W$ is now a tensor, $W \in \mathbb{R}^{D \times P \times Q}$, encoding temporal and spatial information, with $D$ the dimensionality of the observations, $P$ the number of primitives, and $Q$ the length of the primitives. $H$ is now also defined as a tensor, $H \in \mathbb{R}^{Q \times P \times L}$, with $L$ the temporal length of the sequence. For simplicity of the discussion we assume that the primitives have the same length; this restriction can easily be removed by setting $Q$ to be the maximum length of the primitives and padding the remaining elements with zeros. We thus define the data term to be

$$\ell_{data} = \| Y - \mathrm{vec}(W)\,\mathrm{vec}(H) \|_F, \qquad (2)$$

where $\mathrm{vec}(W) \in \mathbb{R}^{D \times PQ}$ and $\mathrm{vec}(H) \in \mathbb{R}^{QP \times L}$ are projections of the tensors to be represented as matrices, i.e., flattening.
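A direct transcription of the data term in Eq. (2), with the flattening made explicit; the reshape conventions below are one consistent choice, not necessarily the authors'.

```python
import numpy as np

def data_term(Y, W, H):
    # Y: (D, L) signal; W: (D, P, Q) primitives; H: (Q, P, L) activations.
    D, P, Q = W.shape
    L = Y.shape[1]
    W_mat = W.reshape(D, P * Q)                      # vec(W): (D, P*Q)
    H_mat = H.transpose(1, 0, 2).reshape(P * Q, L)   # vec(H): (P*Q, L)
    return np.linalg.norm(Y - W_mat @ H_mat, ord='fro')
```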
[Figure 1: Walking dataset composed of multiple walking cycles performed by the same subject. (Left, center) Projection of the data onto the first two principal components of walking; this is the data to be recovered. (Right) Training error (objective) as a function of the number of iterations. Note that our approach converges after only a few iterations.]

When learning dictionaries of human motion, there are additional structure and constraints that one would like the dictionary elements to satisfy. One important property of human motion is that it is smooth. We impose continuity and differentiability constraints by adding a regularization term that encourages smooth curvature, i.e., $\phi(W) = \sum_{p=1}^{P} \|\nabla^2 W_{p,:,:}\|_F$. One of the main difficulties in learning motion dictionaries is that the dictionary words might have very different temporal lengths. Note that this problem does not arise in traditional dictionary learning of natural images, since the size of the dictionary words is manually specified [4, 1, 9]. This makes the learning problem more complex, since one would like to identify not only the number of elements in the dictionary, but also the size of each dictionary word. We address this problem by adding a regularization term that prefers dictionaries with a small number of primitives, as well as primitives of short length. In particular, we extend the group norms over matrices to group norms over tensors and define

$$\ell_{p,q,r}(W) = \left( \sum_{i=1}^{P} \left( \sum_{j=1}^{Q} \left( \sum_{k=1}^{D} |W_{i,j,k}|^p \right)^{q/p} \right)^{r/q} \right)^{1/r},$$

where $W_{i,j,k}$ is the $k$-th dimension at the $j$-th time frame of the $i$-th primitive in $W$.

We also impose additional constraints on the activations $H$. For interpretability, we would like to have only positive activations. Moreover, since the problem is under-constrained, i.e., $H$ and $W$ can be recovered only up to an invertible transformation, $WH = (WC^{-1})(CH)$, we impose that the elements of the activation tensor should lie in the unit interval, i.e., $H_{i,j,k} \in [0,1]$. As in traditional sparse coding, we encourage the activations to be sparse; we impose this by bounding their $L_1$ norm. Finally, to impose interpretability of the results as spatio-temporal primitives, we require that when a spatio-temporal primitive is active, it is active across its whole time-length with constant activation strength, i.e., $\forall i,j,k$: $H_{i,j,k} = H_{i,j+1,k+1}$. We thus formulate the problem of learning motion dictionaries as solving the following optimization problem:

$$\min_{W,H} \ \| Y - \mathrm{vec}(W)\,\mathrm{vec}(H) \|_F + \lambda\, \phi(W) + \beta\, \ell_{p,q,r}(W)$$
$$\text{subject to} \quad \forall i,j,k: \ 0 \le H_{i,j,k} \le 1, \qquad H_{i,j,k} = H_{i,j+1,k+1}, \qquad \sum_{i,j} H_{i,j,k} \le \epsilon_{train} \ \ \forall k, \qquad (3a)$$

where $\epsilon_{train}$, $\lambda$ and $\beta$ are parameters of our model. When optimizing over $W$ or $H$ alone, the problem is convex. We thus perform alternate minimization. Our algorithm converges to a local minimum; the proof is similar to the convergence proof of block coordinate descent, see Prop. 2.7.1 in [2].

[Figure 2: Estimation of W and H when the number of primitives is unknown, using (top) matching pursuit without refractory period, (second row) matching pursuit with refractory period [3], (third row) traditional sparse coding, and (bottom) our approach; panels show W1, W2 and H for each method. Note that our approach is able to recover the primitives, their number and the correct activations. Matching pursuit is able to recover the number of primitives when using the refractory period; however, the activations and the primitives are not correct. When the refractory period is not used, the recovered primitives are very noisy. Sparse coding has a low reconstruction error, but neither the number of primitives, nor the primitives and the activations, are correctly recovered.]

3 Experimental Evaluation

We compare our algorithm to two state-of-the-art approaches in the task of discovering interpretable primitives from motion capture data, namely the sparse coding approach of [7] and matching pursuit [3]. In the following, we first describe the baselines in detail. We then demonstrate our method's ability to estimate the primitives, their number, and the activation patterns, and show that our approach outperforms matching pursuit and sparse coding when learning dictionaries of walking and running motions. For all experiments we set $\epsilon_{train} = 1$, $\epsilon_{test} = 1.3$, $\lambda = 1$ and $\beta = 0.05$, and use the $\ell_{2,1,1}$ norm. Note that similar results were obtained with the $\ell_{2,2,1}$ norm. For SC we use $\lambda = 0.01$, and $c$ is set to the maximum value of the $\ell_2$ norm. The threshold for MP with refractory period is set to 0.1.

Matching pursuit (MP): We follow a similar approach to [3], where an alternate minimization over $W$ and $H$ is employed. In each iteration of the alternate minimization, $W$ is optimized by minimizing $\ell_{data}$ defined in Eq. (2) until convergence. For each iteration in the optimization of $H$, an over-complete dictionary $D$ is created by taking the primitives in $W$ and generating candidates by shifting each primitive in time. Note that the cardinality of the candidate dictionary is $|D| = P(L + Q - 1)$ if $W$ has $P$ primitives and the data is composed of $L$ frames. Once the dictionary is created, a set of primitives is iteratively selected (one at a time) by choosing at each iteration the primitive with the largest scalar product with respect to the residual signal that cannot be explained with the already selected primitives. Primitives are chosen until a threshold on the scalar product is reached. Note that this is an instance of Matching Pursuit [11], a greedy algorithm to solve an $\ell_0$-type optimization. Additionally, in the step of choosing elements from the dictionary, [3] introduced the refractory period, which means that when one element in the dictionary is chosen, all overlapping elements are removed from the dictionary. This is done to avoid multiple activations of primitives. In our experiments we compare our approach to matching pursuit with and without the refractory period.

Sparse coding (SC): We use the sparse coding formulation of [7], which minimizes the Frobenius norm with an $L_1$ regularization penalty on the activations,

$$\min_{\tilde{W},\tilde{H}} \ \| Y - \tilde{W}\tilde{H} \|_F + \lambda \sum_{i,j} |\tilde{H}_{i,j}| \quad \text{subject to} \quad \forall j: \ \|\tilde{W}_{:,j}\| \le c,$$

with $\lambda$ a constant trading off the relative influence of the data fitting term and the regularizer, and $c$ a constant bounding the value of the primitives. Note that now $\tilde{W}$ and $\tilde{H}$ are matrices. Following [7], we solve this optimization problem by alternating between solving with respect to the primitives $\tilde{W}$ and the activations $\tilde{H}$.

3.1 Estimating the number of primitives

In the first experiment we demonstrate the ability of our approach to infer the number of primitives as well as the length of the existing primitives. For this purpose we created a simple dataset which is composed of a single sequence of multiple walking cycles performed by the same subject from the CMU mocap dataset (the data was obtained from mocap.cs.cmu.edu).

[Figure 3: Error as a function of the dimension when adding Gaussian noise of variance 50 and 100. (Top) Walking, (bottom) running; panels report reconstruction error in the 59D and PCA spaces for MP w/o RP, MP w/ RP, SC and our approach.]

[Figure 4: Error as a function of the Gaussian noise variance for 4D and 10D spaces learned from a dataset composed of a single subject. (Top) Walking, (bottom) running.]

We apply PCA to the data, reducing the dimensionality of the observations from 59D to 2D for each time instant.
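Before turning to the results, the matching pursuit baseline described above can be sketched in a few lines: build the $P(L + Q - 1)$ time-shifted candidates, then greedily select atoms until the scalar-product threshold is reached. The refractory-period bookkeeping is omitted, and the whole sketch is our simplification of [3], not its implementation.

```python
import numpy as np

def shifted_dictionary(W, L):
    # W: (P, Q, d) primitives; returns flattened candidates of length L*d,
    # one per (primitive, shift), with shifts in [-(Q-1), L-1].
    P, Q, d = W.shape
    atoms = []
    for p in range(P):
        for s in range(-(Q - 1), L):          # P*(L+Q-1) candidates
            atom = np.zeros((L, d))
            lo, hi = max(s, 0), min(s + Q, L)
            atom[lo:hi] = W[p, lo - s:hi - s]
            atoms.append(atom.ravel())
    return np.array(atoms)                     # (P*(L+Q-1), L*d)

def matching_pursuit(signal, atoms, threshold=0.1):
    residual = signal.ravel().copy()
    chosen = []
    while True:
        scores = atoms @ residual
        best = int(np.argmax(scores))
        if scores[best] <= threshold:          # stop below the threshold
            break
        chosen.append(best)
        norm2 = atoms[best] @ atoms[best] + 1e-12
        residual -= (scores[best] / norm2) * atoms[best]
    return chosen
```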
3.1 Estimating the number of primitives

In the first experiment we demonstrate the ability of our approach to infer the number of primitives as well as the length of the existing primitives. For this purpose we created a simple dataset composed of a single sequence of multiple walking cycles performed by the same subject from the CMU mocap dataset (the data was obtained from mocap.cs.cmu.edu). We apply PCA to the data, reducing the dimensionality of the observations from 59D to 2D for each time instant. Fig. 1 depicts the projections of the data onto the first two principal components as a function of time. In this case it is easy to see that, since the motion is periodic, the signal could be represented by a single 2D primitive whose length is equal to the length of the period. To perform the experiments we initialize our approach and the baselines with a sum of random smooth functions (sinusoids) whose frequencies are different from the principal frequency of the periodic training data, and set the number of primitives to P = 2. One primitive is set to have approximately the same length as a cycle of the periodic motion, and the other primitive is set to be 50% larger. Note that a rough estimate of the length of the primitives could be easily obtained by analyzing the principal frequencies of the signal.

Fig. 2 depicts the results obtained by our approach and the baselines. The first two columns depict the two-dimensional primitives recovered (W1 and W2). Each plot represents $\mathrm{vec}(W_{i,:,:}) \in \Re^{(Q_1+Q_2)\times 1}$. The dotted black line separates the two primitives. Note that we expect these primitives to be similar to the original signal, i.e., $\mathrm{vec}(W_{1,:,:})$ similar to a period in Fig. 1 (left) and $\mathrm{vec}(W_{2,:,:})$ to a period in Fig. 1 (right). The third column depicts the recovered activations $\mathrm{vec}(H) \in \Re^{(Q_1+Q_2)\times L}$. We expect the successful activations to be diagonal, and to appear only once every cycle. Note that our approach is able to recover the number of primitives as well as the primitives themselves and the correct activations. Matching pursuit without refractory period (first row) is not able to recover the primitives, their number, or the activations. Moreover, the estimated signal has high frequencies. Matching pursuit with refractory period (second row) is able to recover the number of primitives; however, the activations are underestimated and the primitives are not very accurate. Sparse coding has a low reconstruction error, but neither the primitives, their number, nor the activations are correctly recovered. This confirms the inability of traditional sparse coding to recover interpretable primitives, and the importance of having interpretability constraints such as the refractory period of matching pursuit and our diagonal constraints. Note also that, as shown in Fig. 1 (right), our approach converges in a few iterations.

[Figure 3 panels: reconstruction-error curves for MP w/o RP, MP w/ RP, SC, and Ours as a function of dimension, for (walk, σ² = 50, e_59D), (walk, σ² = 100, e_59D), (walk, σ² = 50, e_PCA), (walk, σ² = 100, e_PCA), and the corresponding run panels.]

Figure 3: Error as a function of the dimension when adding Gaussian noise of variance 50 and 100. (Top) Walking, (bottom) running.

[Figure 4 panels: reconstruction-error curves as a function of noise variance for (walk, d=4, e_59D), (walk, d=10, e_59D), (walk, d=4, e_PCA), (walk, d=10, e_PCA), (run, d=4, e_59D), (run, d=10, e_59D), (run, d=4, e_PCA), (run, d=10, e_PCA).]

Figure 4: Error as a function of the Gaussian noise variance for 4D and 10D spaces learned from a dataset composed of a single subject. (Top) walking, (bottom) running.

3.2 Quantitative analysis and comparisons

We evaluate the capabilities of our approach to reconstruct new sequences, and compare our approach to the baselines [3, 7] in a denoising scenario as well as when dealing with missing data. We preprocess the data by applying PCA to reduce the dimensionality of the input space; a sketch of this step follows.
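A minimal sketch of the PCA preprocessing, assuming the mocap sequence is stored as an L x 59 array; the function and variable names are ours.

```python
import numpy as np

def pca_reduce(V, d):
    """Project an (L, 59) mocap sequence onto its top-d principal components.

    Returns the (L, d) reduced sequence plus everything needed to map
    reconstructions back to the original 59D space (for the e_59D error).
    """
    mean = V.mean(axis=0)
    Vc = V - mean
    # Right singular vectors of the centered data give the principal directions.
    U, S, Vt = np.linalg.svd(Vc, full_matrices=False)
    components = Vt[:d]              # (d, 59)
    V_low = Vc @ components.T        # (L, d) reduced observations
    return V_low, components, mean

def back_project(V_low, components, mean):
    """Map an (L, d) reconstruction back into the original 59D space."""
    return V_low @ components + mean
```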
We measure error by computing the Frobenius norm between the test sequences and the reconstruction given by the learned W and the estimated activations $H_{test}$,
\[
e_{PCA} = \frac{1}{D} \left\| V_{test} - \mathrm{vec}(W)\,\mathrm{vec}(H_{test}) \right\|_F,
\]
as well as the error in the original 59D space, which can be computed by projecting back into the original space using the singular vectors. Note that W is learned at training time, and the activations $H_{test}$ are estimated at inference time. To evaluate the generalization properties of each algorithm, we compute both errors in a denoising scenario, where $H_{test}$ is obtained using $\tilde{V}_{test} = V_{test} + \epsilon$, with $\epsilon$ i.i.d. Gaussian noise, and the errors are computed using the ground-truth data $V_{test}$. For each experiment we use P = 1, $\lambda = 0.05$, $\lambda_{train} = 1$, $\lambda_{test} = 1.3$ and a rough estimate of Q, which can be easily obtained by examining the principal frequencies of the data [16]. The primitives are initialized to a sum of sinusoids of random frequencies.

We created a walking dataset composed of motions performed by the same subject. In particular we used motions {02, 03, 04, 05, 06, 07, 08, 09, 10, 11} of subject 35 in the CMU mocap dataset. We also performed reconstruction experiments for running motions and used motions {17, 18, 20, 21, 22, 23, 24, 25} of subject 35. In both cases, we use 2 sequences for training and the rest for testing, and report average results over 10 random splits. Fig. 3 depicts reconstruction error in PCA space and in the original space as a function of the dimensionality of the PCA space. Fig. 4 depicts reconstruction error as a function of the noise variance. Our approach outperforms matching pursuit with and without the refractory period in all scenarios. Note that our method outperforms sparse coding when the output is noisy. This is due to the fact that, given a big enough dictionary, sparse coding overfits and can perfectly fit the noise.

[Figure 5 panels: multiple-subject reconstruction-error curves as a function of dimension for (run, P=1, e_59D), (run, P=2, e_59D), (run, P=1, e_PCA), (run, P=2, e_PCA).]

Figure 5: Multiple subject error as a function of the dimension for noisy data with variance 100 and different numbers of primitives. As expected, one primitive is not enough for accurate reconstruction.

[Figure 6 panels: reconstruction-error curves as a function of dimension for (smooth, Q/2, e_59D), (random, Q/2, e_59D), (smooth, 2Q/3, e_59D), (random, 2Q/3, e_59D).]

Figure 6: Missing data and influence of initialization: Error in the 59D space when Q/2 and 2Q/3 of the data is missing. The primitives are either initialized randomly or to a smooth set of sinusoids of random frequencies.
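A sketch of the two error measures defined above, assuming reconstructions live in the d-dimensional PCA space and reusing back_project() from the earlier snippet; the names are ours.

```python
import numpy as np

def reconstruction_errors(V_test, V_hat_low, components, mean):
    """Compute e_PCA and e_59D for a reconstruction in the d-dim PCA space.

    V_test     : (L, 59) ground-truth test sequence
    V_hat_low  : (L, d) reconstruction, e.g. vec(W) vec(H_test) reshaped
    components : (d, 59) principal directions, mean : (59,) training mean
    """
    D = V_hat_low.shape[1]
    V_test_low = (V_test - mean) @ components.T
    e_pca = np.linalg.norm(V_test_low - V_hat_low) / D   # error in PCA space
    V_hat_full = V_hat_low @ components + mean           # back-project to 59D
    e_59d = np.linalg.norm(V_test - V_hat_full)          # error in the original space
    return e_pca, e_59d
```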
We also performed reconstruction experiments for running motions performed by different subjects. In particular we use motions {03, 04, 05, 06} of subject 9 and motions {21, 23, 24, 25} of subject 35. Fig. 5 depicts reconstruction error for our approach when using different numbers of primitives. As expected, one primitive is not enough for accurate reconstruction. When using two primitives our approach performs comparably to sparse coding and clearly outperforms the other baselines.

In the next experiment we show the importance of having interpretable primitives. In particular we compare our approach to the baselines in a missing data scenario, where part of the sequence is missing; Q/2 and 2Q/3 of the frames are missing. We use the single subject walking database.

[Figure 7 panels: (left) error vs. −log λ with training, test, and missing-data curves; (center) error vs. P with and without missing data; (right) reconstruction error vs. log ε for the soft constraints.]

Figure 7: Influence of λ and P on the single subject walking dataset, as well as using soft constraints instead of hard constraints on the activations. (left) Our method is fairly insensitive to the choice of λ. As expected, the reconstruction error of the training data decreases when there is less regularization. The test error, however, is very flat, and increases when there is too much or too little regularization. For missing data, having good primitives is important, and thus regularization is necessary. Note that the horizontal axis depicts −log λ, thus λ decreases for larger values of this axis. (center) Error with (green) and without (red) missing data as a function of P. Our approach is not sensitive to the value of P; one primitive is enough for accurate reconstruction in this dataset. (right) Error when using soft constraints |H_{i,j,k} − H_{i,j+1,k+1}| ≤ ε as a function of ε. The leftmost point corresponds to ε = 0, i.e., H_{i,j,k} = H_{i,j+1,k+1}.

As shown in Fig. 6, our approach clearly outperforms all the baselines. This is due to the fact that sparse coding does not have structure, while the structure imposed by our equality constraints, i.e., ∀i, j, k: H_{i,j,k} = H_{i,j+1,k+1}, helps "hallucinate" the missing data. We also investigate the influence of initialization by using a random non-smooth initialization and the smooth initialization described above, i.e., sinusoids of random frequencies. Note that, like our approach, sparse coding is not sensitive to initialization. This is in contrast with MP, which is very sensitive due to the $\ell_0$-type regularization.

We also investigated the influence of the amount of regularization on W. Towards this end we use the single subject walking dataset, and compute reconstruction error for the training and test data with and without missing data as a function of λ. As shown in Fig. 7 (left), our method is fairly insensitive to the choice of λ. As expected, the reconstruction error of the training data decreases when there is less regularization. The test error in the noiseless case is, however, very flat, and increases slightly when there is too much or too little regularization. When dealing with missing data, having good primitives becomes more important. Note that the horizontal axis depicts −log λ, thus λ decreases for larger values of the horizontal axis. The test error is higher than the training error for large λ since we use $\lambda_{train} = 1$ and $\lambda_{test} = 1.3$. Thus we are more conservative at learning, since we want to learn interpretable primitives. We also investigate the sensitivity of our approach to the number of primitives.
We use the single subject walking dataset and report errors averaged over 10 partitions of the data. As shown in Fig. 7 (middle), our approach is very insensitive to P; in this example a single primitive is enough for accurate reconstruction. We finally investigate the influence of replacing the hard constraints on the activations by soft constraints |H_{i,j,k} − H_{i,j+1,k+1}| ≤ ε. Note that our approach is not sensitive to the value of ε, and that the hard constraints (H_{i,j,k} = H_{i,j+1,k+1}), depicted at the leftmost point in Fig. 7 (right), are almost optimal. This justifies our choice, since when using hard constraints we do not need to search for the optimal value of ε.

4 Conclusion

We have proposed a sparse coding approach to learn interpretable spatio-temporal primitives of human motion. We have formulated the problem as a tensor factorization problem with tensor group norm constraints over the primitives, diagonal constraints on the activations, as well as smoothness constraints that are inherent to human motion. Our approach has proven superior to recently developed matching pursuit and sparse coding algorithms in the task of learning interpretable spatio-temporal primitives of human motion from motion capture data. In the future we plan to investigate applying similar techniques to learn spatio-temporal dictionaries of video data such as dynamic textures.

References
[1] S. Bengio, F. Pereira, Y. Singer, and D. Strelow. Group sparse coding. In NIPS, 2009.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, Massachusetts, 1999.
[3] A. d'Avella and E. Bizzi. Shared and specific muscle synergies in natural motor behaviors. PNAS, 102(8):3076-3081, 2005.
[4] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. on Image Processing, 15(12):3736-3745, 2006.
[5] Z. Ghahramani. Building blocks of movement. Nature, 407:682-683, 2000.
[6] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In Proc. AISTATS, 2010.
[7] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS, 2007.
[8] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In ICML, 2009.
[9] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In ICCV, 2009.
[10] J. Mairal, G. Sapiro, and M. Elad. Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Modeling and Simulation, 7(1):214-241, 2008.
[11] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Processing, 41:3397-3415, 1993.
[12] C. R. Mason, J. E. Gomez, and T. J. Ebner. Hand synergies during reach to grasp. J. of Neurophysiology, 86:2896-2910, 2001.
[13] F. A. Mussa-Ivaldi and E. Bizzi. Motor learning: the combination of primitives. Phil. Trans. Royal Society London, Series B, 355:1755-1769, 2000.
[14] F. A. Mussa-Ivaldi and S. Solla. Neural primitives for motion control. IEEE Journal on Ocean Engineering, 29(3):640-650, 2004.
[15] E. Todorov and Z. Ghahramani. Analysis of the synergies underlying complex hand manipulation. In Proc. of the IEEE Engineering in Medicine and Biology Society Conference, pages 4637-4640, 2004.
[16] R. Urtasun, D. J. Fleet, A. Geiger, J. Popovic, T. Darrell, and N. D. Lawrence. Topologically-constrained latent variable models. In ICML, 2008.
Moreau-Yosida Regularization for Grouped Tree Structure Learning

Jun Liu, Computer Science and Engineering, Arizona State University, [email protected]
Jieping Ye, Computer Science and Engineering, Arizona State University, [email protected]

Abstract

We consider the tree structured group Lasso, where the structure over the features can be represented as a tree with leaf nodes as features and internal nodes as clusters of the features. The structured regularization with a pre-defined tree structure is based on a group-Lasso penalty, where one group is defined for each node in the tree. Such a regularization can help uncover the structured sparsity, which is desirable for applications with some meaningful tree structures on the features. However, the tree structured group Lasso is challenging to solve due to the complex regularization. In this paper, we develop an efficient algorithm for the tree structured group Lasso. One of the key steps in the proposed algorithm is to solve the Moreau-Yosida regularization associated with the grouped tree structure. The main technical contributions of this paper include (1) we show that the associated Moreau-Yosida regularization admits an analytical solution, and (2) we develop an efficient algorithm for determining the effective interval for the regularization parameter. Our experimental results on the AR and JAFFE face data sets demonstrate the efficiency and effectiveness of the proposed algorithm.

1 Introduction

Many machine learning algorithms can be formulated as a penalized optimization problem:
\[
\min_{x} \; l(x) + \lambda \, \Omega(x), \qquad (1)
\]
where $l(x)$ is the empirical loss function (e.g., the least squares loss and the logistic loss), $\lambda > 0$ is the regularization parameter, and $\Omega(x)$ is the penalty term. Recently, sparse learning via $\ell_1$ regularization [20] and its various extensions has received increasing attention in many areas including machine learning, signal processing, and statistics. In particular, the group Lasso [1, 16, 22] utilizes the group information of the features, and yields a solution with grouped sparsity. The traditional group Lasso assumes that the groups are non-overlapping. However, in many applications the features may form more complex overlapping groups. Zhao et al. [23] extended the group Lasso to the case of overlapping groups, imposing hierarchical relationships for the features. Jacob et al. [6] considered group Lasso with overlaps, and studied theoretical properties of the estimator. Jenatton et al. [7] considered the consistency property of the structured overlapping group Lasso, and designed an active set algorithm.

In many applications, the features can naturally be represented using certain tree structures. For example, the image pixels of the face image shown in Figure 1 can be represented as a tree, where each parent node contains a series of child nodes that enjoy spatial locality; genes/proteins may form certain hierarchical tree structures. Kim and Xing [9] studied the tree structured group Lasso for multi-task learning, where multiple related tasks follow a tree structure. One challenge in the practical application of the tree structured group Lasso is that the resulting optimization problem is much more difficult to solve than Lasso and group Lasso, due to the complex regularization.

[Figure 1 shows four panels (a)-(d).]

Figure 1: Illustration of the tree structure of a two-dimensional face image. The 64 x 64 image (a) can be divided into 16 sub-images in (b) according to the spatial locality, where the sub-images can be viewed as the child nodes of (a). Similarly, each 16 x 16 sub-image in (b) can be divided into 16 sub-images in (c), and such a process is repeated for the sub-images in (c) to get (d).
[Figure 2 shows a small index tree with root $G_1^0$, depth-1 nodes $G_1^1, G_2^1, G_3^1$, and depth-2 nodes $G_1^2, G_2^2, G_3^2, G_4^2$.]

Figure 2: A sample index tree for illustration. Root: $G_1^0 = \{1, 2, 3, 4, 5, 6, 7, 8\}$. Depth 1: $G_1^1 = \{1, 2\}$, $G_2^1 = \{3, 4, 5, 6\}$, $G_3^1 = \{7, 8\}$. Depth 2: $G_1^2 = \{1\}$, $G_2^2 = \{2\}$, $G_3^2 = \{3, 4\}$, $G_4^2 = \{5, 6\}$.

In this paper, we develop an efficient algorithm for the tree structured group Lasso, i.e., the optimization problem (1) with $\Omega(\cdot)$ being the grouped tree structure regularization (see Equation 2). One of the key steps in the proposed algorithm is to solve the Moreau-Yosida regularization [17, 21] associated with the grouped tree structure. The main technical contributions of this paper include: (1) we show that the associated Moreau-Yosida regularization admits an analytical solution, and the resulting algorithm for the tree structured group Lasso has a time complexity comparable to Lasso and group Lasso, and (2) we develop an efficient algorithm for determining the effective interval for the parameter $\lambda$, which is important in the practical application of the algorithm. We have performed experimental studies using the AR and JAFFE face data sets, where the features form a hierarchical tree structure based on the spatial locality as shown in Figure 1. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithm. Note that while the present paper was under review, we became aware of a recent work by Jenatton et al. [8], which applied block coordinate ascent in the dual and showed that the algorithm converges in one pass.

2 Grouped Tree Structure Regularization

We begin with the definition of the so-called index tree:

Definition 1. For an index tree $T$ of depth $d$, we let $T_i = \{G_1^i, G_2^i, \ldots, G_{n_i}^i\}$ contain all the node(s) corresponding to depth $i$, where $n_0 = 1$, $G_1^0 = \{1, 2, \ldots, p\}$ and $n_i \ge 1$, $i = 1, 2, \ldots, d$. The nodes satisfy the following conditions: 1) the nodes from the same depth level have non-overlapping indices, i.e., $G_j^i \cap G_k^i = \emptyset$, $\forall i = 1, \ldots, d$, $j \ne k$, $1 \le j, k \le n_i$; and 2) let $G_{j_0}^{i-1}$ be the parent node of a non-root node $G_j^i$; then $G_j^i \subseteq G_{j_0}^{i-1}$.

Figure 2 shows a sample index tree. We can observe that 1) the index sets from different nodes may overlap, e.g., any parent node overlaps with its child nodes; 2) the nodes from the same depth level do not overlap; and 3) the index set of a child node is a subset of that of its parent node. The grouped tree structure regularization is defined as:
\[
\Omega(x) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} w_j^i \, \|x_{G_j^i}\|, \qquad (2)
\]
where $x \in \mathbb{R}^p$, $w_j^i \ge 0$ ($i = 0, 1, \ldots, d$, $j = 1, 2, \ldots, n_i$) is the pre-defined weight for the node $G_j^i$, $\|\cdot\|$ is the Euclidean norm, and $x_{G_j^i}$ is a vector composed of the entries of $x$ with the indices in $G_j^i$ (a small sketch evaluating this norm follows).

In the next section, we study the Moreau-Yosida regularization [17, 21] associated with (2), develop an analytical solution for such a regularization, propose an efficient algorithm for solving (1), and specify the meaningful interval for the regularization parameter $\lambda$.
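A minimal sketch evaluating the norm in (2), assuming the index tree is given as a flat list of (index set, weight) pairs; the representation and names are ours, not from the paper.

```python
import numpy as np

def tree_norm(x, tree):
    """Evaluate Omega(x) = sum_{i,j} w_j^i * ||x_{G_j^i}|| over all tree nodes.

    x    : (p,) parameter vector
    tree : list of (indices, weight) pairs, one per node G_j^i,
           where `indices` holds the 0-based entries of the group.
    """
    return sum(w * np.linalg.norm(x[np.asarray(idx)]) for idx, w in tree)

# The sample tree of Figure 2 (0-based indices, all weights set to 1):
tree = [(range(8), 1.0),                                   # root G_1^0
        ([0, 1], 1.0), ([2, 3, 4, 5], 1.0), ([6, 7], 1.0), # depth 1
        ([0], 1.0), ([1], 1.0), ([2, 3], 1.0), ([4, 5], 1.0)]  # depth 2
x = np.array([1., 2., 1., 1., 4., 4., 1., 1.])
print(tree_norm(x, tree))
```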
3 Moreau-Yosida Regularization of $\Omega(\cdot)$

The Moreau-Yosida regularization associated with the grouped tree structure regularization $\Omega(\cdot)$, for a given $v \in \mathbb{R}^p$, is given by:
\[
\phi_\lambda(v) = \min_{x} \left\{ f(x) = \frac{1}{2}\|x - v\|^2 + \lambda \sum_{i=0}^{d}\sum_{j=1}^{n_i} w_j^i \, \|x_{G_j^i}\| \right\}, \qquad (3)
\]
for some $\lambda > 0$. Denote the minimizer of (3) as $\pi_\lambda(v)$. The Moreau-Yosida regularization has many useful properties: 1) $\phi_\lambda(\cdot)$ is continuously differentiable despite the fact that $\Omega(\cdot)$ is non-smooth; 2) $\pi_\lambda(\cdot)$ is a non-expansive operator. More properties on the general Moreau-Yosida regularization can be found in [5, 10]. Note that $f(\cdot)$ in (3) is indeed a special case of the problem (1) with $l(x) = \frac{1}{2}\|x - v\|^2$. Our recent study has shown that the efficient optimization of the Moreau-Yosida regularization is key to many optimization algorithms [13, Section 2]. Next, we focus on the efficient optimization of (3). For convenience of subsequent discussion, we denote $\lambda_j^i = \lambda w_j^i$.

3.1 An Analytical Solution

We show that the minimization of (3) admits an analytical solution. We first present the detailed procedure for finding the minimizer in Algorithm 1.

Algorithm 1 Moreau-Yosida Regularization of the tree structured group Lasso (MYtgLasso)
Input: $v \in \mathbb{R}^p$, the index tree $T$ with nodes $G_j^i$ ($i = 0, 1, \ldots, d$, $j = 1, 2, \ldots, n_i$) that satisfy Definition 1, the weights $w_j^i \ge 0$, $\lambda > 0$, and $\lambda_j^i = \lambda w_j^i$
Output: $u^0 \in \mathbb{R}^p$
1: Set $u^{d+1} = v$  (4)
2: for $i = d$ down to $0$ do
3:   for $j = 1$ to $n_i$ do
4:     Compute
\[
u^i_{G_j^i} = \begin{cases} 0, & \|u^{i+1}_{G_j^i}\| \le \lambda_j^i \\ \dfrac{\|u^{i+1}_{G_j^i}\| - \lambda_j^i}{\|u^{i+1}_{G_j^i}\|}\; u^{i+1}_{G_j^i}, & \|u^{i+1}_{G_j^i}\| > \lambda_j^i \end{cases} \qquad (5)
\]
5:   end for
6: end for

In the implementation of the MYtgLasso algorithm, we only need to maintain a working variable $u$, which is initialized with $v$. We then traverse the index tree $T$ in the reverse breadth-first order to update $u$. At the traversed node $G_j^i$, we update $u_{G_j^i}$ according to the operation in (5), which reduces the Euclidean norm of $u_{G_j^i}$ by at most $\lambda_j^i$. The time complexity of MYtgLasso is $O(\sum_{i=0}^{d}\sum_{j=1}^{n_i} |G_j^i|)$. By using Definition 1, we have $\sum_{j=1}^{n_i} |G_j^i| \le p$. Therefore, the time complexity of MYtgLasso is $O(pd)$. If the tree is balanced, i.e., $d = O(\log p)$, then the time complexity of MYtgLasso is $O(p \log p)$. A Python sketch of this procedure is given below.

MYtgLasso can help explain why the structured group sparsity can be induced. Let us analyze the tree given in Figure 2, with the solution denoted as $x^\star$. We let $w_j^i = 1$, $\forall i, j$, $\lambda = \sqrt{2}$, and $v = [1, 2, 1, 1, 4, 4, 1, 1]^T$. After traversing the nodes of depth 2, we can see that the elements of $x^\star$ with indices in $G_1^2$ and $G_3^2$ are zero; when the traversal continues to the nodes of depth 1, the elements of $x^\star$ with indices in $G_1^1$ and $G_3^1$ are set to zero, but those in $G_4^2$ are still nonzero. Finally, after traversing the root node, we obtain $x^\star = [0, 0, 0, 0, 1, 1, 0, 0]^T$.
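The following is a minimal sketch of Algorithm 1, extending the flat (indices, weight) tree representation from the earlier snippet with an explicit depth per node so the traversal can run bottom-up; the representation and names are ours.

```python
import numpy as np

def mytglasso(v, tree, lam):
    """Analytical minimizer of (3): traverse the tree bottom-up, shrinking each group.

    v    : (p,) input vector
    tree : list of (depth, indices, weight) triples, one per node G_j^i
    lam  : regularization parameter lambda
    Returns u^0, the unique minimizer of the Moreau-Yosida regularization.
    """
    u = np.asarray(v, dtype=float).copy()
    # Reverse breadth-first order: deepest nodes first, root (depth 0) last.
    for depth, idx, w in sorted(tree, key=lambda node: -node[0]):
        idx = np.asarray(idx)
        g = u[idx]
        norm = np.linalg.norm(g)
        thresh = lam * w                          # lambda_j^i = lambda * w_j^i
        if norm <= thresh:
            u[idx] = 0.0                          # the whole group is zeroed out
        else:
            u[idx] = (norm - thresh) / norm * g   # shrink the group norm by thresh
    return u

# Reproduce the worked example of Section 3.1 (Figure 2 tree, lambda = sqrt(2)):
tree = [(0, range(8), 1.0),
        (1, [0, 1], 1.0), (1, [2, 3, 4, 5], 1.0), (1, [6, 7], 1.0),
        (2, [0], 1.0), (2, [1], 1.0), (2, [2, 3], 1.0), (2, [4, 5], 1.0)]
v = np.array([1., 2., 1., 1., 4., 4., 1., 1.])
print(mytglasso(v, tree, np.sqrt(2)))   # -> [0, 0, 0, 0, 1, 1, 0, 0]
```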
Next, we show that MYtgLasso finds the exact minimizer of (3). The main result is summarized in the following theorem:

Theorem 1. $u^0$ returned by Algorithm 1 is the unique solution to (3).

Before giving the detailed proof of Theorem 1, we introduce some notation and present several technical lemmas. Define the mapping $\Omega_j^i: \mathbb{R}^p \to \mathbb{R}$ as
\[
\Omega_j^i(x) = \|x_{G_j^i}\|. \qquad (6)
\]
We can then express the penalty in (3) as $\lambda\,\Omega(x) = \sum_{i=0}^{d} \sum_{j=1}^{n_i} \lambda_j^i \, \Omega_j^i(x)$. The subdifferential of $f(\cdot)$ defined in (3) at the point $x$ can be written as:
\[
\partial f(x) = x - v + \sum_{i=0}^{d} \sum_{j=1}^{n_i} \lambda_j^i \, \partial \Omega_j^i(x), \qquad (7)
\]
where
\[
\partial \Omega_j^i(x) = \begin{cases} \left\{ y \in \mathbb{R}^p : \|y\| \le 1, \; y_{\overline{G_j^i}} = 0 \right\}, & \text{if } x_{G_j^i} = 0 \\ \left\{ y \in \mathbb{R}^p : y_{G_j^i} = \dfrac{x_{G_j^i}}{\|x_{G_j^i}\|}, \; y_{\overline{G_j^i}} = 0 \right\}, & \text{if } x_{G_j^i} \ne 0, \end{cases} \qquad (8)
\]
and $\overline{G_j^i}$ denotes the complementary set of $G_j^i$.

Lemma 1. For any $1 \le i \le d$, $1 \le j \le n_i$, we can find a unique path from the node $G_j^i$ to the root node $G_1^0$. Let the nodes on this path be $G_{r_l}^l$, for $l = 0, 1, \ldots, i$, with $r_0 = 1$ and $r_i = j$. We have
\[
G_j^i \subseteq G_{r_l}^l, \quad \forall l = 0, 1, \ldots, i-1, \qquad (9)
\]
\[
G_j^i \cap G_r^l = \emptyset, \quad \forall r \ne r_l, \; l = 1, 2, \ldots, i-1, \; r = 1, 2, \ldots, n_l. \qquad (10)
\]

Proof: According to Definition 1, we can find a unique path from the node $G_j^i$ to the root node $G_1^0$. In addition, based on the structure of the index tree, we have (9) and (10). □

Lemma 2. For any $i = 1, 2, \ldots, d$, $j = 1, 2, \ldots, n_i$, we have
\[
u^{i+1}_{G_j^i} - u^i_{G_j^i} \in \lambda_j^i \left(\partial \Omega_j^i(u^i)\right)_{G_j^i}, \qquad (11)
\]
\[
\partial \Omega_j^i(u^i) \subseteq \partial \Omega_j^i(u^0). \qquad (12)
\]

Proof: We can verify (11) using (5), (6) and (8). For (12), it follows from (6) and (8) that it is sufficient to verify that
\[
u^0_{G_j^i} = \alpha_j^i \, u^i_{G_j^i}, \quad \text{for some } \alpha_j^i \ge 0. \qquad (13)
\]
It follows from Lemma 1 that we can find a unique path from $G_j^i$ to $G_1^0$. Denote the nodes on the path as $G_{r_l}^l$, where $l = 0, 1, \ldots, i$, $r_i = j$, and $r_0 = 1$. We first analyze the relationship between $u^i_{G_j^i}$ and $u^{i-1}_{G_j^i}$. If $\|u^i_{G_{r_{i-1}}^{i-1}}\| \le \lambda_{r_{i-1}}^{i-1}$, we have $u^{i-1}_{G_{r_{i-1}}^{i-1}} = 0$, which leads to $u^{i-1}_{G_j^i} = 0$ by using (9). Otherwise, if $\|u^i_{G_{r_{i-1}}^{i-1}}\| > \lambda_{r_{i-1}}^{i-1}$, we have
\[
u^{i-1}_{G_{r_{i-1}}^{i-1}} = \frac{\|u^i_{G_{r_{i-1}}^{i-1}}\| - \lambda_{r_{i-1}}^{i-1}}{\|u^i_{G_{r_{i-1}}^{i-1}}\|}\; u^i_{G_{r_{i-1}}^{i-1}},
\]
which leads to $u^{i-1}_{G_j^i} = \frac{\|u^i_{G_{r_{i-1}}^{i-1}}\| - \lambda_{r_{i-1}}^{i-1}}{\|u^i_{G_{r_{i-1}}^{i-1}}\|}\, u^i_{G_j^i}$ by using (9). Therefore, we have
\[
u^{i-1}_{G_j^i} = \alpha_i \, u^i_{G_j^i}, \quad \text{for some } \alpha_i \ge 0. \qquad (14)
\]
By a similar argument, we have
\[
u^{l-1}_{G_{r_l}^l} = \alpha_l \, u^l_{G_{r_l}^l}, \quad \alpha_l \ge 0, \; \forall l = 1, 2, \ldots, i-1. \qquad (15)
\]
Together with (9), we have
\[
u^{l-1}_{G_j^i} = \alpha_l \, u^l_{G_j^i}, \quad \alpha_l \ge 0, \; \forall l = 1, 2, \ldots, i-1. \qquad (16)
\]
From (14) and (16), we can see that (13) holds with $\alpha_j^i = \prod_{l=1}^{i} \alpha_l$. This completes the proof. □

We are now ready to prove our main result:

Proof of Theorem 1: It is easy to verify that $f(\cdot)$ defined in (3) is strongly convex; thus it admits a unique minimizer. Our methodology for the proof is to show that
\[
0 \in \partial f(u^0), \qquad (17)
\]
which is the sufficient and necessary condition for $u^0$ to be the minimizer of $f(\cdot)$.

According to Definition 1, the leaf nodes are non-overlapping. We assume that the union of the leaf nodes equals $\{1, 2, \ldots, p\}$; otherwise, we can add to the index tree additional leaf nodes with weight 0 to satisfy this assumption. Clearly, the original index tree and the new index tree with the additional leaf nodes of weight 0 yield the same penalty $\Omega(\cdot)$ in (2), the same Moreau-Yosida regularization in (3), and the same solution from Algorithm 1. Therefore, to prove (17), it suffices to show $0 \in (\partial f(u^0))_{G_j^i}$ for all the leaf nodes $G_j^i$.

Next, we focus on establishing the following relationship:
\[
0 \in \left(\partial f(u^0)\right)_{G_1^d}. \qquad (18)
\]
It follows from Lemma 1 that we can find a unique path from the node $G_1^d$ to the root $G_1^0$. Let the nodes on this path be $G_{r_l}^l$, for $l = 0, 1, \ldots, d$, with $r_0 = 1$ and $r_d = 1$. By using (10) of Lemma 1, we can see that the nodes containing the index set $G_1^d$ are exactly those on this path. In other words, $\forall x$, we have
\[
\left(\partial \Omega_r^l(x)\right)_{G_1^d} = \{0\}, \quad \forall r \ne r_l, \; l = 1, 2, \ldots, d-1, \; r = 1, 2, \ldots, n_l, \qquad (19)
\]
by using (6) and (8). Applying (11) and (12) of Lemma 2 to each node on the aforementioned path, we have
\[
u^{l+1}_{G_{r_l}^l} - u^l_{G_{r_l}^l} \in \lambda_{r_l}^l \left(\partial \Omega_{r_l}^l(u^l)\right)_{G_{r_l}^l} \subseteq \lambda_{r_l}^l \left(\partial \Omega_{r_l}^l(u^0)\right)_{G_{r_l}^l}, \quad \forall l = 0, 1, \ldots, d. \qquad (20)
\]
Making use of (9), we obtain from (20) the following relationship:
\[
u^{l+1}_{G_1^d} - u^l_{G_1^d} \in \lambda_{r_l}^l \left(\partial \Omega_{r_l}^l(u^0)\right)_{G_1^d}, \quad \forall l = 0, 1, \ldots, d. \qquad (21)
\]
Adding (21) for $l = 0, 1, \ldots, d$, we have
\[
u^{d+1}_{G_1^d} - u^0_{G_1^d} \in \sum_{l=0}^{d} \lambda_{r_l}^l \left(\partial \Omega_{r_l}^l(u^0)\right)_{G_1^d}. \qquad (22)
\]
It follows from (4), (7), (19) and (22) that (18) holds. Similarly, we have $0 \in (\partial f(u^0))_{G_j^i}$ for the other leaf nodes $G_j^i$. Thus, we have (17). □
3.2 The Proposed Optimization Algorithm

With the analytical solution for $\pi_\lambda(\cdot)$, the minimizer of (3), we can apply many existing methods for solving (1). First, we show in the following lemma that the optimal solution to (1) can be computed as a fixed point. We shall show in Section 3.3 that the result in this lemma can also help determine the interval for the values of $\lambda$.

Lemma 3. Let $x^\star$ be an optimal solution to (1). Then, $x^\star$ satisfies:
\[
x^\star = \pi_{\tau\lambda}\!\left(x^\star - \tau\, l'(x^\star)\right), \quad \forall \tau > 0. \qquad (23)
\]

Proof: $x^\star$ is an optimal solution to (1) if and only if
\[
0 \in l'(x^\star) + \lambda \, \partial\Omega(x^\star), \qquad (24)
\]
which leads to
\[
0 \in x^\star - \left(x^\star - \tau\, l'(x^\star)\right) + \tau\lambda \, \partial\Omega(x^\star), \quad \forall \tau > 0. \qquad (25)
\]
Thus, we have $x^\star = \arg\min_x \frac{1}{2}\|x - (x^\star - \tau\, l'(x^\star))\|^2 + \tau\lambda\, \Omega(x)$. Recall that $\pi_\lambda(\cdot)$ is the minimizer of (3). We have (23). □

It follows from Lemma 3 that we can apply the fixed point continuation method [4] for solving (1). It is interesting to note that, with an appropriately chosen $\tau$, the scheme in (23) indeed corresponds to the gradient method developed for composite function optimization [2, 19], achieving the global convergence rate of $O(1/k)$ for $k$ iterations. In addition, the scheme in (23) can be accelerated to obtain the accelerated gradient descent [2, 19], where the Moreau-Yosida regularization also needs to be evaluated in each of its iterations. We employ the accelerated gradient descent developed in [2] for the optimization in this paper. The algorithm is called "tgLasso", which stands for the tree structured group Lasso. Note that tgLasso includes our previous algorithm [11] as a special case, when the index tree is of depth 1 and $w_1^0 = 0$. A proximal-gradient sketch built on the MYtgLasso snippet is given below.
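The following sketch implements the plain fixed-point iteration of Eq. (23), using mytglasso() from the earlier snippet as the proximal operator; the paper employs the accelerated variant of [2], whereas this simplified loop uses a fixed step size. The example problem, step-size choice, and names are ours.

```python
import numpy as np

def tglasso_prox_grad(grad_l, x0, tree, lam, tau, n_iter=200):
    """Fixed-point / proximal-gradient iteration: x <- pi_{tau*lam}(x - tau * l'(x))."""
    x = x0.copy()
    for _ in range(n_iter):
        x = mytglasso(x - tau * grad_l(x), tree, tau * lam)
    return x

# Example: least squares loss l(x) = 0.5 * ||A x - b||^2 on the Figure 2 tree.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
grad_l = lambda x: A.T @ (A @ x - b)
tau = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L the Lipschitz constant of l'
x_hat = tglasso_prox_grad(grad_l, np.zeros(8), tree, lam=0.5, tau=tau)
```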
3.3 The Effective Interval for the Values of $\lambda$

When estimating the model parameters via (1), a key issue is to choose appropriate values for the regularization parameter $\lambda$. A commonly used approach is to select the regularization parameter from a set of candidate values, which, however, need to be pre-specified in advance. Therefore, it is essential to specify the effective interval for the values of $\lambda$. An analysis of MYtgLasso in Algorithm 1 shows that, with increasing $\lambda$, the entries of the solution to (3) are monotonically decreasing. Intuitively, the solution to (3) shall be exactly zero if $\lambda$ is sufficiently large and all the entries of $x$ are penalized in $\Omega(x)$. Next, we summarize the main results of this subsection.

Theorem 2. The zero point is a solution to (1) if and only if the zero point is a solution to (3) with $v = -l'(0)$. For the penalty $\Omega(x)$, let us assume that all entries of $x$ are penalized, i.e., $\forall l \in \{1, 2, \ldots, p\}$, there exists at least one node $G_j^i$ that contains $l$ and meanwhile $w_j^i > 0$. Then, for any $0 < \|l'(0)\| < +\infty$, there exists a unique $\lambda_{max} < +\infty$ satisfying: 1) if $\lambda \ge \lambda_{max}$, the zero point is a solution to (1), and 2) if $0 < \lambda < \lambda_{max}$, the zero point is not a solution to (1).

Proof: If $x^\star = 0$ is the solution to (1), we have (24). Setting $\tau = 1$ in (23), we obtain that $x^\star = 0$ is also the solution to (3) with $v = -l'(0)$. Conversely, if $x^\star = 0$ is the solution to (3) with $v = -l'(0)$, we have $0 \in l'(0) + \lambda\,\partial\Omega(0)$, which indicates that $x^\star = 0$ is the solution to (1).

The function $\Omega(x)$ is closed convex. According to [18, Chapter 3.1.5], $\partial\Omega(0)$ is a closed convex and non-empty bounded set. From (8), it is clear that $0 \in \partial\Omega(0)$. Therefore, we have $\|x\| \le R$, $\forall x \in \partial\Omega(0)$, where $R$ is a finite radius constant. Let $S = \{x : x = -\alpha R\, l'(0)/\|l'(0)\|, \; \alpha \in [0, 1]\}$ be the line segment from $0$ to $-R\, l'(0)/\|l'(0)\|$. It is obvious that $S$ is closed convex and bounded. Define $I = S \cap \partial\Omega(0)$, which is clearly closed convex and bounded. Define
\[
\bar{\lambda}_{max} = \|l'(0)\| \Big/ \max_{x \in I} \|x\|.
\]
It follows from $\|l'(0)\| > 0$ and the boundedness of $I$ that $\bar{\lambda}_{max} > 0$. We next show $\bar{\lambda}_{max} < +\infty$. Otherwise, we have $I = \{0\}$; thus, $\forall \lambda > 0$, we have $-l'(0)/\lambda \notin \partial\Omega(0)$, which indicates that $0$ is neither the solution to (1) nor to (3) with $v = -l'(0)$. Recall the assumption that $\forall l \in \{1, 2, \ldots, p\}$, there exists at least one node $G_j^i$ that contains $l$ and meanwhile $w_j^i > 0$. It follows from Algorithm 1 that there exists a $\tilde{\lambda} < +\infty$ such that, when $\lambda > \tilde{\lambda}$, $0$ is a solution to (3) with $v = -l'(0)$, leading to a contradiction. Therefore, we have $0 < \bar{\lambda}_{max} < +\infty$. Let $\lambda_{max} = \bar{\lambda}_{max}$. The arguments hold since 1) if $\lambda \ge \lambda_{max}$, then $-l'(0)/\lambda \in I \subseteq \partial\Omega(0)$; and 2) if $0 < \lambda < \lambda_{max}$, then $-l'(0)/\lambda \notin \partial\Omega(0)$. □

When $l'(0) = 0$, the problem (1) has a trivial zero solution. We next focus on the nontrivial case $l'(0) \ne 0$. We present an algorithm for efficiently computing $\lambda_{max}$ in Algorithm 2. In Step 1, $\lambda_0$ is an initial guess of the solution. Our empirical study shows that $\lambda_0 = \sqrt{\|l'(0)\|^2 \big/ \sum_{i=0}^{d}\sum_{j=1}^{n_i} (w_j^i)^2}$ works quite well. In Steps 2-6, we specify an interval $[\lambda_1, \lambda_2]$ in which $\lambda_{max}$ resides. Finally, in Steps 7-14, we apply bisection for computing $\lambda_{max}$.

Algorithm 2 Finding $\lambda_{max}$ via Bisection
Input: $l'(0)$, the index tree $T$ with nodes $G_j^i$ ($i = 0, 1, \ldots, d$, $j = 1, 2, \ldots, n_i$), the weights $w_j^i \ge 0$, $\lambda_0$, and $\delta = 10^{-10}$
Output: $\lambda_{max}$
1: Set $\lambda = \lambda_0$
2: if $\pi_\lambda(-l'(0)) = 0$ then
3:   Set $\lambda_2 = \lambda$, and find the largest $\lambda_1 = 2^{-i}\lambda$, $i = 1, 2, \ldots$, such that $\pi_{\lambda_1}(-l'(0)) \ne 0$
4: else
5:   Set $\lambda_1 = \lambda$, and find the smallest $\lambda_2 = 2^{i}\lambda$, $i = 1, 2, \ldots$, such that $\pi_{\lambda_2}(-l'(0)) = 0$
6: end if
7: while $\lambda_2 - \lambda_1 \ge \delta$ do
8:   Set $\lambda = (\lambda_1 + \lambda_2)/2$
9:   if $\pi_\lambda(-l'(0)) = 0$ then
10:    Set $\lambda_2 = \lambda$
11:  else
12:    Set $\lambda_1 = \lambda$
13:  end if
14: end while
15: $\lambda_{max} = \lambda$

A bisection sketch in Python follows.
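A minimal rendering of Algorithm 2, reusing mytglasso() from the earlier snippet as the proximal operator $\pi_\lambda(\cdot)$; it assumes the penalization condition of Theorem 2 holds, and the names are ours.

```python
import numpy as np

def lambda_max(grad0, tree, lam0, delta=1e-10):
    """Bisection of Algorithm 2: smallest lambda with pi_lambda(-l'(0)) = 0.

    grad0 : l'(0), the gradient of the loss at zero
    tree  : (depth, indices, weight) triples as in the mytglasso() snippet
    lam0  : initial guess, e.g. sqrt(||l'(0)||^2 / sum_{i,j} (w_j^i)^2)
    """
    v = -np.asarray(grad0, dtype=float)
    is_zero = lambda lam: not np.any(mytglasso(v, tree, lam))
    lam = lam0
    if is_zero(lam):
        lam2, lam1 = lam, lam / 2
        while is_zero(lam1):          # largest 2^{-i} lam0 with a nonzero prox
            lam2, lam1 = lam1, lam1 / 2
    else:
        lam1, lam2 = lam, lam * 2
        while not is_zero(lam2):      # smallest 2^{i} lam0 with a zero prox
            lam1, lam2 = lam2, lam2 * 2
    while lam2 - lam1 >= delta:       # bisection on [lam1, lam2]
        lam = (lam1 + lam2) / 2
        if is_zero(lam):
            lam2 = lam
        else:
            lam1 = lam
    return lam
```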
4 Experiments

We have conducted experiments to evaluate the efficiency and effectiveness of the proposed tgLasso algorithm on the face data sets JAFFE [14] and AR [15]. JAFFE contains 213 images of ten Japanese actresses with seven facial expressions: neutral, happy, disgust, fear, anger, sadness, and surprise. We used a subset of AR that contains 400 images corresponding to 100 subjects, with each subject containing four facial expressions: neutral, smile, anger, and scream. For both data sets, we resize the images to 64 x 64, and make use of the tree structure depicted in Figure 1. Our task is to discriminate each facial expression from the rest. Thus, we have seven and four binary classification tasks for JAFFE and AR, respectively. We employ the least squares loss for $l(\cdot)$, and set the regularization parameter $\lambda = r \times \lambda_{max}$, where $\lambda_{max}$ is computed using Algorithm 2, and $r \in \{5\times 10^{-1}, 2\times 10^{-1}, 1\times 10^{-1}, 5\times 10^{-2}, 2\times 10^{-2}, 1\times 10^{-2}, 5\times 10^{-3}, 2\times 10^{-3}\}$. The source codes, included in the SLEP package [12], are available online at http://www.public.asu.edu/~jye02/Software/SLEP/.

Table 1: Computational time (seconds) for one binary classification task (averaged over 7 and 4 runs for JAFFE and AR, respectively). The total time for all eight regularization parameters is reported.

          | tgLasso | alternating algorithm [9]
JAFFE     |    30   |          4054
AR        |    73   |          5155

Efficiency of the Proposed tgLasso: We compare our proposed tgLasso with the recently proposed alternating algorithm [9] designed for the tree-guided group Lasso. We report the total computational time (seconds) for running one binary classification task (averaged over 7 and 4 tasks for JAFFE and AR, respectively) corresponding to the eight regularization parameters in Table 1. We can observe that tgLasso is much more efficient than the alternating algorithm. We note that the key step of tgLasso in each iteration is the associated Moreau-Yosida regularization, which can be efficiently computed due to the existence of an analytical solution; the key step of the alternating algorithm in each iteration is a matrix inversion, which does not scale well to high-dimensional data.

Classification Performance: We compare the classification performance of tgLasso with Lasso. On AR, we use 50 subjects for training and the remaining 50 subjects for testing; on JAFFE, we use 8 subjects for training and the remaining 2 subjects for testing. This subject-independent setting is challenging, as the subjects to be tested are not included in the training set. The reported results are averaged over 10 runs for randomly chosen subjects. For each binary classification task, we compute the balanced error rate [3] to cope with the unbalanced positive and negative samples.

[Figure 3 shows two panels, JAFFE and AR: balanced error rate (%) vs. the regularization parameter r, for tgLasso and Lasso.]

Figure 3: Classification performance comparison between Lasso and the tree structured group Lasso. The horizontal axis corresponds to different regularization parameters $\lambda = r \times \lambda_{max}$.

Figure 4: Markers obtained by Lasso and the tree structured group Lasso (white pixels correspond to the markers). First row: face images of four expressions (neutral, smile, anger, scream) from the AR data set; second row: the markers identified by the tree structured group Lasso; third row: the markers identified by Lasso.

We report the averaged results in Figure 3. The results show that tgLasso outperforms Lasso in both cases. This verifies the effectiveness of tgLasso in incorporating the tree structure in the formulation, i.e., the spatial locality information of the face images. Figure 4 shows the markers identified by tgLasso and Lasso under the best regularization parameter. We can observe from the figure that tgLasso results in a block sparsity solution, and most of the selected pixels are around the mouth and eyes.

5 Conclusion

In this paper, we consider the efficient optimization of the tree structured group Lasso. Our main technical result shows that the Moreau-Yosida regularization associated with the tree structured group Lasso admits an analytical solution. Based on the Moreau-Yosida regularization, we design an efficient algorithm for solving the grouped tree structure regularized optimization problem for smooth convex loss functions, and develop an efficient algorithm for determining the effective interval for the parameter $\lambda$. Our experimental results on the AR and JAFFE face data sets demonstrate the efficiency and effectiveness of the proposed algorithm. We plan to apply the proposed algorithm to other applications in computer vision and bioinformatics involving tree structures.
Acknowledgments

This work was supported by NSF IIS-0612069, IIS-0812551, CCF-0811790, IIS-0953662, NGA HM1582-08-1-0016, NSFC 60905035, 61035003, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the US Army.

References
[1] F. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In International Conference on Machine Learning, 2004.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] I. Guyon, A. B. Hur, S. Gunn, and G. Dror. Result analysis of the NIPS 2003 feature selection challenge. In Neural Information Processing Systems, pages 545-552, 2004.
[4] E. T. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for l1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107-1130, 2008.
[5] J. Hiriart-Urruty and C. Lemarechal. Convex Analysis and Minimization Algorithms I & II. Springer Verlag, Berlin, 1993.
[6] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In International Conference on Machine Learning, 2009.
[7] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523v2, 2009.
[8] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning, 2010.
[9] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In International Conference on Machine Learning, 2010.
[10] C. Lemarechal and C. Sagastizabal. Practical aspects of the Moreau-Yosida regularization I: Theoretical properties. SIAM Journal on Optimization, 7(2):367-385, 1997.
[11] J. Liu, S. Ji, and J. Ye. Multi-task feature learning via efficient l2,1-norm minimization. In Uncertainty in Artificial Intelligence, 2009.
[12] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
[13] J. Liu, L. Yuan, and J. Ye. An efficient algorithm for a class of fused lasso problems. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2010.
[14] M. J. Lyons, J. Budynek, and S. Akamatsu. Automatic classification of single facial images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(12):1357-1362, 1999.
[15] A. M. Martinez and R. Benavente. The AR face database. Technical report, 1998.
[16] L. Meier, S. Geer, and P. Buhlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B, 70:53-71, 2008.
[17] J.-J. Moreau. Proximite et dualite dans un espace hilbertien. Bulletin de la Societe Mathematique de France, 93:273-299, 1965.
[18] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[19] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Paper, 2007.
[20] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[21] K. Yosida. Functional Analysis. Springer Verlag, Berlin, 1964.
[22] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49-67, 2006.
[23] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468-3497, 2009.
Transduction with Matrix Completion: Three Birds with One Stone

Andrew B. Goldberg (1), Xiaojin Zhu (1), Benjamin Recht (1), Jun-Ming Xu (1), Robert Nowak (2)
Department of {(1) Computer Sciences, (2) Electrical and Computer Engineering}
University of Wisconsin-Madison, Madison, WI 53706
{goldberg, jerryzhu, brecht, xujm}@cs.wisc.edu, [email protected]

Abstract

We pose transductive classification as a matrix completion problem. By assuming the underlying matrix has a low rank, our formulation is able to handle three problems simultaneously: i) multi-label learning, where each item has more than one label, ii) transduction, where most of these labels are unspecified, and iii) missing data, where a large number of features are missing. We obtained satisfactory results on several real-world tasks, suggesting that the low rank assumption may not be as restrictive as it seems. Our method allows for different loss functions to apply on the feature and label entries of the matrix. The resulting nuclear norm minimization problem is solved with a modified fixed-point continuation method that is guaranteed to find the global optimum.

1 Introduction

Semi-supervised learning methods make assumptions about how unlabeled data can help in the learning process, such as the manifold assumption (data lies on a low-dimensional manifold) and the cluster assumption (classes are separated by low density regions) [4, 16]. In this work, we present two transductive learning methods under the novel assumption that the feature-by-item and label-by-item matrices are jointly low rank. This assumption effectively couples different label prediction tasks, allowing us to implicitly use observed labels in one task to recover unobserved labels in others. The same is true for imputing missing features. In fact, our methods learn in the difficult regime of multi-label transductive learning with missing data that one sometimes encounters in practice. That is, each item is associated with many class labels, many of the items' labels may be unobserved (some items may be completely unlabeled across all labels), and many features may also be unobserved. Our methods build upon recent advances in matrix completion, with efficient algorithms to handle matrices with mixed real-valued features and discrete labels. We obtain promising experimental results on a range of synthetic and real-world data.

2 Problem Formulation

Let $x_1, \ldots, x_n \in \mathbb{R}^d$ be feature vectors associated with $n$ items. Let $X = [x_1 \ldots x_n]$ be the $d \times n$ feature matrix whose columns are the items. Let there be $t$ binary classification tasks, $y_1, \ldots, y_n \in \{-1, 1\}^t$ be the label vectors, and $Y = [y_1 \ldots y_n]$ be the $t \times n$ label matrix. Entries in $X$ or $Y$ can be missing at random. Let $\Omega_X$ be the index set of observed features in $X$, such that $(i, j) \in \Omega_X$ if and only if $x_{ij}$ is observed. Similarly, let $\Omega_Y$ be the index set of observed labels in $Y$. Our main goal is to predict the missing labels $y_{ij}$ for $(i, j) \notin \Omega_Y$. Of course, this reduces to standard transductive learning when $t = 1$, $|\Omega_X| = nd$ (no missing features), and $1 < |\Omega_Y| < n$ (some missing labels). In our more general setting, as a side product we are also interested in imputing the missing features, and de-noising the observed features, in $X$.
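A minimal sketch of this setup with boolean observation masks; the shapes, observation rates, and names are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t = 20, 100, 3                     # features, items, binary tasks

X = rng.standard_normal((d, n))          # d x n feature matrix (columns = items)
Y = rng.choice([-1, 1], size=(t, n))     # t x n label matrix with entries in {-1, +1}

# Observation masks Omega_X and Omega_Y: True where an entry is observed.
omega_X = rng.random((d, n)) < 0.6       # ~60% of features observed
omega_Y = rng.random((t, n)) < 0.1       # ~10% of labels observed

# Goal: predict Y at ~omega_Y, and impute / de-noise X.
```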
We then take advantage of the sparsity to fill in the missing labels and features using a modified method of matrix completion. Specifically, we assume the following generative story. It starts from a d ? n low rank ?pre?-feature matrix X0 , with rank(X0 )  min(d, n). The actual feature matrix X is obtained by adding iid Gaussian noise to the entries of X0 : X = X0 + ,  0 0 > where ij ? N (0, ?2 ). Meanwhile, the t ?soft? labels y1j . . . ytj ? yj0 ? Rt of item j are 0 0 produced by yj = Wxj + b, where W is a t ? d weight matrix, and b ? Rt is a bias vector. Let     Y0 = y10 . . . yn0 be thesoft label matrix. Note the combined (t + d) ? n matrix Y0 ; X0 is low rank too: rank( Y0 ; X0 ) ? rank(X0 ) + 1. The actual label  yij ? {?1, 1} is generated randomly 0 0 ) = 1/ 1 + exp(?yij yij ) . Finally, two random masks ?X , ?Y via a sigmoid function: P (yij |yij are applied to expose only some of the entries in X and Y, and we use ? to denote the percentage of observed entries. This generative story may seem restrictive, but our approaches based on it perform well on synthetic and real datasets, outperforming several baselines with linear classifiers. 2.2 Matrix Completion for Heterogeneous Matrix Entries With the above data generation model, our task can be defined as follows. Given the partially observed features and labels as specified by X, Y, ?X , ?Y , we would like to recover the intermediate low rank matrix Y0 ; X0 . Then, X0 will contain the denoised and completed features, and sign(Y0 ) will contain the completed and correct labels.   The key assumption is that the (t + d) ? n stacked matrix Y0 ; X0 is of low rank. We will start from a ?hard? formulation that is illustrative but impractical, then relax it. argmin rank(Z) (1) Z?R(t+d)?n sign(zij ) = yij , ?(i, j) ? ?Y ; z(i+t)j = xij , ?(i, j) ? ?X  0 0 Here, Z is meant to recover Y ; X by directly minimizing the rank while obeying the observed features and labels. Note the indices (i, j) ? ?X are with respect to X, such that i ? {1, . . . , d}. To index the corresponding element in the larger stacked matrix Z, we need to shift the row index by t to skip the part for Y0 , and hence the constraints z(i+t)j = xij . The above formulation assumes that there is no noise in the generation processes X0 ? X and Y0 ? Y. Of course, there are several issues with formulation (1), and we handle them as follows: s.t. ? rank() is a non-convex function and difficult to optimize. Following recent work in matrix completion [3, 2], we relax rank() with the convex nuclear norm kZk? = Pmin(t+d,n) ?k (Z), where ?k ?s are the singular values of Z. The relationship between k=1 rank(Z) and kZk? is analogous to that of `0 -norm and `1 -norm for vectors. ? There is feature noise from X0 to X. Instead of the equality constraints in (1), we minimize a loss function cx (z(i+t)j , xij ). We choose the squared loss cx (u, v) = 12 (u ? v)2 in this work, but other convex loss functions are possible too. ? Similarly, there is label noise from Y0 to Y. The observed labels are of a different type than the observed features. We therefore introduce another loss function cy (zij , yij ) to account for the heterogeneous data. In this work, we use the logistic loss cy (u, v) = log(1 + exp(?uv)). In addition to these changes, we will model the bias b either explicitly or implicitly, leading to two alternative matrix completion formulations below. Formulation 1 (MC-b). In this formulation, we explicitly optimize the bias b ? Rt inaddition to (t+d)?n Z , hence the name. 
Formulation 1 (MC-b). In this formulation, we explicitly optimize the bias $b \in \mathbb{R}^t$ in addition to $Z \in \mathbb{R}^{(t+d) \times n}$, hence the name. Here, $Z$ corresponds to the stacked matrix $[WX^0; X^0]$ instead of $[Y^0; X^0]$, making it potentially lower rank. The optimization problem is

$$\operatorname*{argmin}_{Z, b} \ \mu \|Z\|_* + \frac{\lambda}{|\Omega_Y|} \sum_{(i,j) \in \Omega_Y} c_y(z_{ij} + b_i, y_{ij}) + \frac{1}{|\Omega_X|} \sum_{(i,j) \in \Omega_X} c_x(z_{(i+t)j}, x_{ij}), \quad (2)$$

where $\mu, \lambda$ are positive trade-off weights. Notice the bias $b$ is not regularized. This is a convex problem, whose optimization procedure will be discussed in Section 3. Once the optimal $Z, b$ are found, we recover the task-$i$ label of item $j$ by $\operatorname{sign}(z_{ij} + b_i)$, and feature $k$ of item $j$ by $z_{(k+t)j}$.

Formulation 2 (MC-1). In this formulation, the bias is modeled implicitly within $Z$. Similar to how bias is commonly handled in linear classifiers, we append an additional feature with constant value one to each item. The corresponding pre-feature matrix is augmented into $[X^0; \mathbf{1}^\top]$, where $\mathbf{1}$ is the all-1 vector. Under the same label assumption $\mathbf{y}_j^0 = W x_j^0 + b$, the rows of the soft label matrix $Y^0$ are linear combinations of rows in $[X^0; \mathbf{1}^\top]$, i.e., $\operatorname{rank}([Y^0; X^0; \mathbf{1}^\top]) = \operatorname{rank}([X^0; \mathbf{1}^\top])$. We then let $Z$ correspond to the $(t + d + 1) \times n$ stacked matrix $[Y^0; X^0; \mathbf{1}^\top]$, by forcing its last row to be $\mathbf{1}^\top$ (hence the name):

$$\operatorname*{argmin}_{Z \in \mathbb{R}^{(t+d+1) \times n}} \ \mu \|Z\|_* + \frac{\lambda}{|\Omega_Y|} \sum_{(i,j) \in \Omega_Y} c_y(z_{ij}, y_{ij}) + \frac{1}{|\Omega_X|} \sum_{(i,j) \in \Omega_X} c_x(z_{(i+t)j}, x_{ij}) \quad (3)$$
$$\text{s.t.} \quad z_{(t+d+1)\cdot} = \mathbf{1}^\top.$$

This is a constrained convex optimization problem. Once the optimal $Z$ is found, we recover the task-$i$ label of item $j$ by $\operatorname{sign}(z_{ij})$, and feature $k$ of item $j$ by $z_{(k+t)j}$.

MC-b and MC-1 differ mainly in what is in $Z$, which leads to different behaviors of the nuclear norm. Despite the generative story, we do not explicitly recover the weight matrix $W$ in these formulations. Other formulations are certainly possible. One way is to let $Z$ correspond to $[Y^0; X^0]$ directly, without introducing the bias $b$ or the all-1 row, and hope nuclear norm minimization will prevail. This is inferior in our preliminary experiments, and we do not explore it further in this paper.

3 Optimization Techniques

We solve MC-b and MC-1 using modifications of the Fixed Point Continuation (FPC) method of Ma, Goldfarb, and Chen [10].¹ While nuclear norm minimization can be converted into a semidefinite programming (SDP) problem [2], current SDP solvers are severely limited in the size of problems they can solve. Instead, the basic fixed point approach is a computationally efficient alternative, which provably converges to the globally optimal solution and has been shown to outperform SDP solvers in terms of matrix recoverability.

3.1 Fixed Point Continuation for MC-b

We first describe our modified FPC method for MC-b. It differs from [10] in the extra bias variables and multiple loss functions. Our fixed point iterative algorithm to solve the unconstrained problem (2) consists of two alternating steps for each iteration $k$:

1. (gradient step) $b^{k+1} = b^k - \tau_b\, g(b^k)$, $\quad A^k = Z^k - \tau_Z\, g(Z^k)$
2. (shrinkage step) $Z^{k+1} = S_{\tau_Z \mu}(A^k)$.

In the gradient step, $\tau_b$ and $\tau_Z$ are step sizes whose choice will be discussed next. Overloading notation a bit, $g(b^k)$ is the vector gradient, and $g(Z^k)$ is the matrix gradient, respectively, of the two loss terms in (2) (i.e., excluding the nuclear norm term):

$$g(b_i) = \frac{\lambda}{|\Omega_Y|} \sum_{j : (i,j) \in \Omega_Y} \frac{-y_{ij}}{1 + \exp(y_{ij}(z_{ij} + b_i))} \quad (4)$$

$$g(z_{ij}) = \begin{cases} \dfrac{\lambda}{|\Omega_Y|} \cdot \dfrac{-y_{ij}}{1 + \exp(y_{ij}(z_{ij} + b_i))}, & i \le t \text{ and } (i, j) \in \Omega_Y \\[1ex] \dfrac{1}{|\Omega_X|} (z_{ij} - x_{(i-t)j}), & i > t \text{ and } (i - t, j) \in \Omega_X \\[1ex] 0, & \text{otherwise} \end{cases} \quad (5)$$

Note for $g(z_{ij})$, $i > t$, we need to shift down (un-stack) the row index by $t$ in order to map the element in $Z$ back to the item $x_{(i-t)j}$.

¹ While the primary method of [10] is Fixed Point Continuation with Approximate Singular Value Decomposition (FPCA), where the approximate SVD is used to speed up the algorithm, we opt to use an exact SVD for simplicity and will refer to the method simply as FPC.
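Equations (4) and (5) translate into a few vectorized lines. The sketch below is our own rendering (not the authors' code) and feeds the solver sketch that follows the algorithm boxes below.

```python
import numpy as np

def mcb_gradient(Z, b, X, Y, mask_X, mask_Y, lam):
    """Gradients (4) and (5) of the two loss terms of (2) w.r.t. b and Z."""
    t, d = Y.shape[0], X.shape[0]
    # Logistic-loss residuals on observed labels: -y / (1 + exp(y (z + b))).
    R = np.zeros(Y.shape)
    M = Y * (Z[:t] + b[:, None])
    R[mask_Y] = -Y[mask_Y] / (1.0 + np.exp(M[mask_Y]))
    g_b = (lam / mask_Y.sum()) * R.sum(axis=1)          # Eq. (4)
    g_Z = np.zeros_like(Z)                              # Eq. (5)
    g_Z[:t] = (lam / mask_Y.sum()) * R                  # label block, i <= t
    g_Z[t:t + d][mask_X] = (Z[t:t + d][mask_X] - X[mask_X]) / mask_X.sum()
    return g_b, g_Z
```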
Algorithm 1: FPC algorithm for MC-b.
    Input: initial matrix $Z_0$, bias $b_0$, parameters $\mu, \lambda$, step sizes $\tau_b, \tau_Z$
    Determine $\mu_1 > \mu_2 > \dots > \mu_L = \mu > 0$. Set $Z = Z_0$, $b = b_0$.
    foreach $\mu = \mu_1, \mu_2, \dots, \mu_L$ do
        while not converged do
            Compute $b = b - \tau_b\, g(b)$ and $A = Z - \tau_Z\, g(Z)$
            Compute the SVD $A = U \Sigma V^\top$
            Compute $Z = U \max(\Sigma - \tau_Z \mu, 0) V^\top$
        end
    end
    Output: recovered matrix $Z$, bias $b$

Algorithm 2: FPC algorithm for MC-1.
    Input: initial matrix $Z_0$, parameters $\mu, \lambda$, step size $\tau_Z$
    Determine $\mu_1 > \mu_2 > \dots > \mu_L = \mu > 0$. Set $Z = Z_0$.
    foreach $\mu = \mu_1, \mu_2, \dots, \mu_L$ do
        while not converged do
            Compute $A = Z - \tau_Z\, g(Z)$
            Compute the SVD $A = U \Sigma V^\top$
            Compute $Z = U \max(\Sigma - \tau_Z \mu, 0) V^\top$
            Project $Z$ to the feasible region $z_{(t+d+1)\cdot} = \mathbf{1}^\top$
        end
    end
    Output: recovered matrix $Z$

In the shrinkage step, $S_{\tau_Z \mu}(\cdot)$ is a matrix shrinkage operator. Let $A^k = U \Sigma V^\top$ be the SVD of $A^k$. Then $S_{\tau_Z \mu}(A^k) = U \max(\Sigma - \tau_Z \mu, 0) V^\top$, where $\max$ is elementwise. That is, the shrinkage operator shifts the singular values down, and truncates any negative values to zero. This step reduces the nuclear norm.

Even though the problem is convex, convergence can be slow. We follow [10] and use a continuation or homotopy method to improve the speed. This involves beginning with a large value $\mu_1 > \mu$ and solving a sequence of subproblems, each with a decreasing $\mu$ value and using the previous solution as its initial point. The sequence of values is determined by a decay parameter $\eta_\mu$: $\mu_{k+1} = \max\{\mu_k \eta_\mu, \mu\}$, $k = 1, \dots, L - 1$, where $\mu$ is the final value to use, and $L$ is the number of rounds of continuation. The complete FPC algorithm for MC-b is listed in Algorithm 1.

A minor modification of the argument in [10] reveals that as long as we choose non-negative step sizes satisfying $\tau_b < 4|\Omega_Y|/(\lambda n)$ and $\tau_Z < \min\{4|\Omega_Y|/\lambda, |\Omega_X|\}$, the algorithm MC-b will be guaranteed to converge to a global optimum. Indeed, to guarantee convergence, we only need that the gradient step is non-expansive in the sense that
$$\|b_1 - \tau_b g(b_1) - b_2 + \tau_b g(b_2)\|_2^2 + \|Z_1 - \tau_Z g(Z_1) - Z_2 + \tau_Z g(Z_2)\|_F^2 \le \|b_1 - b_2\|_2^2 + \|Z_1 - Z_2\|_F^2$$
for all $b_1, b_2, Z_1$, and $Z_2$. Our choice of $\tau_b$ and $\tau_Z$ guarantees such non-expansiveness. Once this non-expansiveness is satisfied, the remainder of the convergence analysis is the same as in [10].

3.2 Fixed Point Continuation for MC-1

Our modified FPC method for MC-1 is similar except for two differences. First, there is no bias variable $b$. Second, the shrinkage step will in general not satisfy the all-1-row constraints in (3). Thus, we add a third projection step at the end of each iteration to project $Z^{k+1}$ back to the feasible region, by simply setting its last row to all 1's. The complete algorithm for MC-1 is given in Algorithm 2. We were unable to prove convergence for this gradient + shrinkage + projection algorithm. Nonetheless, in our empirical experiments, Algorithm 2 always converges and tends to outperform MC-b. The two algorithms have about the same convergence speed.
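A compact end-to-end sketch of Algorithm 2 (MC-1), with the shrinkage operator and the continuation loop, is given below. This is our own minimal implementation (exact SVD, a fixed inner iteration count in place of a convergence test, and a heuristic default for $\mu_1$ following Section 4), not the authors' code.

```python
import numpy as np

def shrink(A, thresh):
    """S_thresh(A) = U max(Sigma - thresh, 0) V^T: shift singular values down."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - thresh, 0.0)) @ Vt

def mc1_gradient(Z, X, Y, mask_X, mask_Y, lam):
    """Loss gradient for (3): as in (5) but without the bias; last row untouched."""
    t, d = Y.shape[0], X.shape[0]
    G = np.zeros_like(Z)
    M = Y * Z[:t]
    G[:t][mask_Y] = (lam / mask_Y.sum()) * (-Y[mask_Y] / (1.0 + np.exp(M[mask_Y])))
    G[t:t + d][mask_X] = (Z[t:t + d][mask_X] - X[mask_X]) / mask_X.sum()
    return G

def fpc_mc1(X, Y, mask_X, mask_Y, mu, lam, tau_Z, mu1=None, eta_mu=0.25, n_inner=100):
    """Algorithm 2: gradient step + shrinkage step + projection, with continuation."""
    (t, n), d = Y.shape, X.shape[0]
    Z = np.zeros((t + d + 1, n))
    Z[-1] = 1.0                              # start in the feasible region
    if mu1 is None:                          # sigma_1 * eta_mu, unobserved set to 0
        observed = np.vstack([Y * mask_Y, X * mask_X])
        mu1 = eta_mu * np.linalg.svd(observed, compute_uv=False)[0]
    mu_k = mu1                               # mu_1 > mu_2 > ... > mu_L = mu
    while True:
        for _ in range(n_inner):
            A = Z - tau_Z * mc1_gradient(Z, X, Y, mask_X, mask_Y, lam)
            Z = shrink(A, tau_Z * mu_k)      # reduces the nuclear norm
            Z[-1] = 1.0                      # project back: z_(t+d+1),. = 1^T
        if mu_k <= mu:
            return Z
        mu_k = max(mu_k * eta_mu, mu)
```

On the synthetic data sampled earlier, `np.sign(Z[:t])` then gives the completed labels and rows `t` through `t + d - 1` of `Z` the imputed features.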
4 Experiments

We now empirically study the ability of matrix completion to perform multi-class transductive classification when there is missing data. We first present a family of 24 experiments on a synthetic task by systematically varying different aspects of the task, including the rank of the problem, noise level, number of items, and observed label and feature percentage. We then present experiments on two real-world datasets: music emotions and yeast microarray. In each experiment, we compare MC-b and MC-1 against four other baseline algorithms. Our results show that MC-1 consistently outperforms other methods, and MC-b follows closely.

Parameter Tuning and Other Settings for MC-b and MC-1: To tune the parameters $\lambda$ and $\mu$, we use 5-fold cross validation (CV) separately for each experiment. Specifically, we randomly divide $\Omega_X$ and $\Omega_Y$ into five disjoint subsets each. We then run our matrix completion algorithms using $\frac{4}{5}$ of the observed entries, measure the performance on the remaining $\frac{1}{5}$, and average over the five folds. Since our main goal is to predict unobserved labels, we use label error as the CV performance criterion to select parameters. Note that tuning $\mu$ is quite efficient since all values under consideration can be evaluated in one run of the continuation method. We set $\eta_\mu = 0.25$ and, as in [10], consider $\mu$ values starting at $\sigma_1 \eta_\mu$, where $\sigma_1$ is the largest singular value of the matrix of observed entries in $[Y; X]$ (with the unobserved entries set to 0), and decrease $\mu$ until $10^{-5}$. The range of $\lambda$ values considered was $\{10^{-3}, 10^{-2}, 10^{-1}, 1\}$. We initialized $b_0$ to be all zero and $Z_0$ to be the rank-1 approximation of the matrix of observed entries in $[Y; X]$ (with unobserved entries set to 0), obtained by performing an SVD and reconstructing the matrix using only the largest singular value and the corresponding left and right singular vectors. The step sizes were set as follows: $\tau_Z = \min(3.8|\Omega_Y|/\lambda, |\Omega_X|)$, $\tau_b = 3.8|\Omega_Y|/(\lambda n)$. Convergence was defined as relative change in the objective functions (2)(3) smaller than $10^{-5}$.

Baselines: We compare to the following baselines, each consisting of some missing-feature imputation step on $X$ first, then using a standard SVM to predict the labels: [FPC+SVM] Matrix completion on $X$ alone using FPC [10]. [EM(k)+SVM] Expectation Maximization algorithm to impute missing $X$ entries using a mixture of $k$ Gaussian components. As in [9], missing features, mixing component parameters, and the assignments of items to components are treated as hidden variables, which are estimated in an iterative manner to maximize the likelihood of the data. [Mean+SVM] Impute each missing feature by the mean of the observed entries for that feature. [Zero+SVM] Impute missing features by filling in zeros. After imputation, an SVM is trained using the available (noisy) labels in $\Omega_Y$ for that task, and predictions are made for the rest of the labels. All SVMs are linear, trained using SVMlin², and the regularization parameter is tuned using 5-fold cross validation separately for each task. The range of parameter values considered was $\{10^{-8}, 10^{-7}, \dots, 10^{7}, 10^{8}\}$.

Evaluation Method: To evaluate performance, we consider two measures: transductive label error, i.e., the percentage of unobserved labels predicted incorrectly; and relative feature imputation error $\sum_{ij \notin \Omega_X} (x_{ij} - \hat{x}_{ij})^2 / \sum_{ij \notin \Omega_X} x_{ij}^2$, where $\hat{x}$ is the predicted feature value. In the tables below, for each parameter setting, we report the mean performance (and standard deviation in parentheses) of different algorithms over 10 random trials.

² http://vikas.sindhwani.org/svmlin.html
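Both evaluation measures are mechanical to compute once the recovered matrix is available. A short sketch (our code, for the MC-1 layout where the top $t$ rows of $Z$ hold the label block):

```python
import numpy as np

def transductive_label_error(Z, Y, mask_Y):
    """Percentage of unobserved labels predicted incorrectly via sign(z_ij)."""
    t = Y.shape[0]
    hidden = ~mask_Y
    pred = np.where(Z[:t][hidden] >= 0, 1, -1)
    return 100.0 * np.mean(pred != Y[hidden])

def relative_imputation_error(Z, X_true, mask_X, t):
    """Sum over unobserved (i,j) of (x_ij - xhat_ij)^2, divided by sum of x_ij^2."""
    hidden = ~mask_X
    X_hat = Z[t:t + X_true.shape[0]]
    return ((X_true[hidden] - X_hat[hidden]) ** 2).sum() / (X_true[hidden] ** 2).sum()
```

In the synthetic setting the full feature matrix is known, so `X_true` is simply the generated $X$ with the mask hiding entries from the learner only.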
The best algorithm within each parameter setting, as well as any statistically indistinguishable algorithms via a two-tailed paired t-test at significance level ? = 0.05, are marked in bold. 4.1 Synthetic Data Experiments Synthetic Data Generation: We generate a family of synthetic datasets to systematically explore the performance of the algorithms. We first create a rank-r matrix X0 = LR> , where L ? Rd?r and R ? Rn?r with entries drawn iid from N (0, 1). We then normalize X0 such that its entries have variance 1. Next, we create a weight matrix W ? Rt?d and bias vector b ? Rt , with all entries drawn iid from N (0, 10). We then produce X, Y0 , Y according to section 2.1. Finally, we produce the random ?X , ?Y masks with ? percent observed entries. Using the above procedure, we vary ? = 10%, 20%, 40%, n = 100, 400, r = 2, 4, and ?2 = 0.01, 0.1, while fixing t = 10, d = 20, to produce 24 different parameter settings. For each setting, we generate 10 trials, where the randomness is in the data and mask. Synthetic experiment results: Table 1 shows the transductive label errors, and Table 2 shows the relative feature imputation errors, on the synthetic datasets. We make several observations. Observation 1: MC-b and MC-1 are the best for feature imputation, as Table 2 shows. However, the imputations are not perfect, because in these particular parameter settings the ratio between the number of observed entries over the degrees of freedom needed to describe the feature matrix (i.e., r(d + n ? r)) is below the necessary condition for perfect matrix completion [2], and because there is some feature noise. Furthermore, our CV tuning procedure selects parameters ?, ? to optimize label error, which often leads to suboptimal imputation performance. In a separate experiment (not reported here) when we made the ratio sufficiently large and without noise, and specifically tuned for 2 http://vikas.sindhwani.org/svmlin.html 5 Table 1: Transductive label error of six algorithms on the 24 synthetic datasets. The varying parameters are feature noise ?2 , rank(X0 ) = r, number of items n, and observed label and feature percentage ?. Each row is for a unique parameter combination. Each cell shows the mean(standard deviation) of transductive label error (in percentage) over 10 random trials. The ?meta-average? row is the simple average over all parameter settings and all trials. ?2 0.01 0.1 r 2 n 100 ? 
10% 20% 40% 400 10% 20% 40% 4 100 10% 20% 40% 400 10% 20% 40% 2 100 10% 20% 40% 400 10% 20% 40% 4 100 10% 20% 40% 400 10% 20% 40% meta-average MC-b MC-1 FPC+SVM EM1+SVM Mean+SVM Zero+SVM 37.8(4.0) 23.5(2.9) 15.1(3.1) 26.5(2.0) 15.9(2.5) 11.7(2.0) 42.5(4.0) 33.2(2.3) 19.6(3.1) 35.3(3.1) 24.4(2.3) 14.6(1.8) 39.6(5.5) 25.2(2.6) 15.7(3.1) 27.6(2.1) 18.0(2.2) 12.0(2.1) 42.5(4.3) 33.3(1.9) 21.4(2.7) 36.3(2.7) 25.5(2.0) 16.0(1.8) 25.6 31.8(4.3) 17.0(2.2) 10.8(1.8) 19.9(1.7) 11.7(1.9) 8.0(1.6) 40.8(4.4) 26.2(2.8) 14.3(2.7) 32.1(1.6) 19.1(1.3) 9.5(0.5) 34.6(3.5) 20.1(1.7) 12.6(1.4) 22.6(1.9) 15.2(1.7) 10.1(1.3) 41.5(2.5) 29.0(2.2) 18.4(3.1) 34.0(1.7) 21.8(1.0) 12.8(0.8) 21.4 34.8(7.0) 17.6(2.1) 9.6(1.5) 23.7(1.7) 12.6(2.2) 7.2(1.8) 41.5(2.6) 26.7(1.7) 13.6(2.6) 33.4(1.6) 20.5(1.4) 9.2(0.9) 37.3(6.4) 21.6(2.6) 13.2(2.0) 27.6(2.4) 16.8(2.3) 10.4(2.1) 42.3(2.0) 30.9(3.1) 18.7(2.4) 35.1(1.2) 23.8(1.5) 13.9(1.2) 22.6 34.6(3.9) 19.7(2.4) 10.4(1.0) 24.2(1.9) 12.0(1.9) 7.3(1.4) 43.2(2.2) 30.8(2.7) 14.1(2.4) 34.2(1.8) 19.8(1.1) 8.6(1.1) 40.2(5.3) 26.8(3.7) 15.1(2.4) 28.8(2.6) 18.4(2.5) 11.1(1.9) 45.6(1.9) 34.9(3.0) 21.6(2.4) 36.3(1.4) 25.1(1.4) 14.7(1.3) 24.1 40.5(5.7) 28.7(4.1) 16.5(2.5) 32.4(2.9) 20.0(1.9) 12.2(1.8) 43.5(2.9) 35.5(1.4) 22.5(2.0) 37.7(1.2) 26.9(1.5) 16.4(1.2) 41.5(6.0) 31.8(4.7) 18.5(2.7) 34.5(3.3) 22.6(2.4) 14.1(2.0) 44.6(2.9) 36.2(2.3) 23.9(2.0) 38.7(1.3) 28.4(1.7) 18.3(1.2) 28.6 40.5(5.1) 27.4(4.4) 15.4(2.3) 31.5(2.7) 19.7(1.7) 12.1(2.0) 42.9(2.9) 33.9(1.5) 21.7(2.3) 38.2(1.4) 26.9(1.3) 16.5(1.3) 41.0(5.7) 29.9(4.0) 17.2(2.4) 33.6(2.8) 21.8(2.5) 14.0(2.4) 43.6(2.3) 35.4(1.6) 23.3(2.5) 39.1(1.2) 28.4(1.8) 18.2(1.2) 28.0 imputation error, both MC-b and MC-1 did achieve perfect feature imputation. Also, FPC+SVM is slightly worse in feature imputation. This may seem curious as FPC focuses exclusively on imputing X. We believe the fact that MC-b and MC-1 can use information in Y to enhance feature imputation in X made them better than FPC+SVM. Observation 2: MC-1 is the best for multi-label transductive classification, as suggested by Table 1. Surprisingly, the feature imputation advantage of MC-b did not translate into classification, and FPC+SVM took second place. Observation 3: The same factors that affect standard matrix completion also affect classification performance of MC-b and MC-1. As the tables show, everything else being equal, less feature noise (smaller ?2 ), lower rank r, more items, or more observed features and labels, reduce label error. Beneficial combination of these factors (the 6th row) produces the lowest label errors. Matrix completion benefits from more tasks. We performed one additional synthetic data experiment examining the effect of t (the number of tasks) on MC-b and MC-1, with the remaining data parameters fixed at ? = 10%, n = 400, r = 2, d = 20, and ?2 = 0.01. Table 3 reveals that both MC methods achieve statistically significantly better label prediction and imputation performance with t = 10 than with only t = 2 (as determined by two-sample t-tests at significance level 0.05). 4.2 Music Emotions Data Experiments In this task introduced by Trohidis et al. [14], the goal is to predict which of several types of emotion are present in a piece of music. The data3 consists of n = 593 songs of a variety of musical genres, each labeled with one or more of t = 6 emotions (i.e., amazed-surprised, happy-pleased, relaxingcalm, quiet-still, sad-lonely, and angry-fearful). 
Each song is represented by d = 72 features (8 rhythmic, 64 timbre-based) automatically extracted from a 30-second sound clip. 3 Available at http://mulan.sourceforge.net/datasets.html 6 Table 2: Relative feature imputation error on the synthetic datasets. The algorithm Zero+SVM is not shown because it by definition has relative feature imputation error 1. ?2 0.01 0.1 r 2 n 100 ? 10% 20% 40% 400 10% 20% 40% 4 100 10% 20% 40% 400 10% 20% 40% 2 100 10% 20% 40% 400 10% 20% 40% 4 100 10% 20% 40% 400 10% 20% 40% meta-average MC-b MC-1 FPC+SVM EM1+SVM Mean+SVM 0.84(0.04) 0.54(0.08) 0.29(0.06) 0.73(0.03) 0.43(0.04) 0.30(0.10) 0.99(0.04) 0.77(0.05) 0.42(0.07) 0.87(0.04) 0.69(0.07) 0.34(0.05) 0.92(0.05) 0.69(0.07) 0.51(0.05) 0.79(0.03) 0.64(0.06) 0.48(0.04) 1.01(0.04) 0.84(0.03) 0.59(0.03) 0.90(0.02) 0.75(0.04) 0.56(0.03) 0.66 0.87(0.06) 0.57(0.06) 0.27(0.06) 0.72(0.04) 0.46(0.05) 0.22(0.04) 0.96(0.03) 0.78(0.05) 0.40(0.03) 0.88(0.03) 0.67(0.04) 0.34(0.03) 0.93(0.04) 0.72(0.06) 0.52(0.05) 0.80(0.03) 0.64(0.06) 0.45(0.05) 0.97(0.03) 0.85(0.03) 0.61(0.04) 0.92(0.02) 0.77(0.02) 0.55(0.04) 0.66 0.88(0.06) 0.57(0.07) 0.27(0.06) 0.76(0.03) 0.50(0.04) 0.24(0.05) 0.96(0.03) 0.77(0.04) 0.42(0.04) 0.89(0.01) 0.69(0.03) 0.38(0.03) 0.93(0.05) 0.74(0.06) 0.53(0.05) 0.84(0.03) 0.67(0.04) 0.49(0.05) 0.97(0.03) 0.85(0.03) 0.63(0.04) 0.92(0.01) 0.79(0.03) 0.59(0.04) 0.68 1.01(0.12) 0.67(0.13) 0.34(0.03) 0.79(0.07) 0.45(0.04) 0.21(0.04) 1.22(0.11) 0.92(0.07) 0.49(0.04) 1.00(0.08) 0.66(0.03) 0.29(0.02) 1.18(0.10) 0.94(0.07) 0.67(0.08) 0.96(0.07) 0.73(0.07) 0.57(0.07) 1.25(0.05) 1.07(0.06) 0.80(0.09) 1.08(0.07) 0.86(0.05) 0.66(0.06) 0.78 1.06(0.02) 1.03(0.02) 1.01(0.01) 1.02(0.01) 1.01(0.00) 1.00(0.00) 1.05(0.01) 1.02(0.01) 1.01(0.01) 1.01(0.00) 1.01(0.00) 1.00(0.00) 1.06(0.02) 1.03(0.02) 1.02(0.01) 1.02(0.01) 1.01(0.00) 1.00(0.00) 1.05(0.02) 1.02(0.01) 1.01(0.01) 1.01(0.01) 1.01(0.00) 1.00(0.00) 1.02 Table 3: More tasks help matrix completion (? = 10%, n = 400, r = 2, d = 20, ?2 = 0.01). t 2 10 MC-b MC-1 FPC+SVM MC-b 30.1(2.8) 22.9(2.2) 20.5(2.5) 26.5(2.0) 19.9(1.7) 23.7(1.7) transductive label error MC-1 FPC+SVM 0.78(0.07) 0.78(0.04) 0.76(0.03) 0.73(0.03) 0.72(0.04) 0.76(0.03) relative feature imputation error Table 4: Performance on the music emotions data. ? =40% 60% 80% 28.0(1.2) 25.2(1.0) 22.2(1.6) 27.4(0.8) 23.7(1.6) 19.8(2.4) 26.9(0.7) 25.2(1.6) 24.4(2.0) 26.0(1.1) 23.6(1.1) 21.2(2.3) 26.2(0.9) 23.1(1.2) 21.6(1.6) 26.3(0.8) 24.2(1.0) 22.6(1.3) 30.3(0.6) 28.9(1.1) 25.7(1.4) transductive label error Algorithm MC-b MC-1 FPC+SVM EM1+SVM EM4+SVM Mean+SVM Zero+SVM ? =40% 60% 80% 0.69(0.05) 0.54(0.10) 0.41(0.02) 0.60(0.05) 0.46(0.12) 0.25(0.03) 0.64(0.01) 0.46(0.02) 0.31(0.03) 0.46(0.09) 0.23(0.04) 0.13(0.01) 0.49(0.10) 0.27(0.04) 0.15(0.02) 0.18(0.00) 0.19(0.00) 0.20(0.01) 1.00(0.00) 1.00(0.00) 1.00(0.00) relative feature imputation error We vary the percentage of observed entries ? = 40%, 60%, 80%. For each ?, we run 10 random trials with different masks ?X , ?Y . For this dataset, we tuned only ? with CV, and set ? = 1. The results are in Table 4. Most importantly, these results show that MC-1 is useful for this realworld multi-label classification problem, leading to the best (or statistically indistinguishable from the best) transductive error performance with 60% and 80% of the data available, and close to the best with only 40%. We also compared these algorithms against an ?oracle baseline? (not shown in the table). 
In this baseline, we give 100% features (i.e., no indices are missing from ?X ) and the training labels in ?Y to a standard SVM, and let it predict the unspecified labels. On the same random trials, for observed percentage ? = 40%, 60%, 80%, the oracle baseline achieved label error rate 22.1(0.8), 21.3(0.8), 20.5(1.8) respectively. Interestingly, MC-1 with ? = 80% (19.8) is statistically indistinguishable from the oracle baseline. 7 4.3 Yeast Microarray Data Experiments This dataset comes from a biological domain and involves the problem of Yeast gene functional classification. We use the data studied by Elisseeff and Weston [5], which contains n = 2417 examples (Yeast genes) with d = 103 input features (results from microarray experiments).4 We follow the approach of [5] and predict each gene?s membership in t = 14 functional classes. For this larger dataset, we omitted the computationally expensive EM4+SVM methods, and tuned only ? for matrix completion while fixing ? = 1. Table 5 reveals that MC-b leads to statistically significantly lower transductive label error for this biological dataset. Although not highlighted in the table, MC-1 is also statistically better than the SVM methods in label error. In terms of feature imputation performance, the MC methods are weaker than FPC+SVM. However, it seems simultaneously predicting the missing labels and features appears to provide a large advantage to the MC methods. It should be pointed out that all algorithms except Zero+SVM in fact have small but non-zero standard deviation on imputation error, despite what the fixed-point formatting in the table suggests. For instance, with ? = 40%, the standard deviation is 0.0009 for MC-1, 0.0011 for FPC+SVM, and 0.0001 for Mean+SVM. Again, we compared these algorithms to an oracle SVM baseline with 100% observed entries in ?X . The oracle SVM approach achieves label error of 20.9(0.1), 20.4(0.2), and 20.1(0.3) for ? =40%, 60%, and 80% observed labels, respectively. Both MC-b and MC-1 significantly outperform this oracle under paired t-tests at significance level 0.05. We attribute this advantage to a combination of multi-label learning and transduction that is intrinsic to our matrix completion methods. Table 5: Performance on the yeast data. ? =40% 60% 80% 16.1(0.3) 12.2(0.3) 8.7(0.4) 16.7(0.3) 13.0(0.2) 8.5(0.4) 21.5(0.3) 20.8(0.3) 20.3(0.3) 22.0(0.2) 21.2(0.2) 20.4(0.2) 21.7(0.2) 21.1(0.2) 20.5(0.4) 21.6(0.2) 21.1(0.2) 20.5(0.4) transductive label error 5 Algorithm MC-b MC-1 FPC+SVM EM1+SVM Mean+SVM Zero+SVM ? =40% 60% 80% 0.83(0.02) 0.76(0.00) 0.73(0.02) 0.86(0.00) 0.92(0.00) 0.74(0.00) 0.81(0.00) 0.76(0.00) 0.72(0.00) 1.15(0.02) 1.04(0.02) 0.77(0.01) 1.00(0.00) 1.00(0.00) 1.00(0.00) 1.00(0.00) 1.00(0.00) 1.00(0.00) relative feature imputation error Discussions and Future Work We have introduced two matrix completion methods for multi-label transductive learning with missing features, which outperformed several baselines. In terms of problem formulation, our methods differ considerably from sparse multi-task learning [11, 1, 13] in that we regularize the feature and label matrix directly, without ever learning explicit weight vectors. Our methods also differ from multi-label prediction via reduction to binary classification or ranking [15], and via compressed sensing [7], which assumes sparsity in that each item has a small number of positive labels, rather than the low-rank nature of feature matrices. These methods do not naturally allow for missing features. 
Yet other multi-label methods identify a subspace of highly predictive features across tasks in a first stage, and learn in this subspace in a second stage [8, 12]. Our methods do not require separate stages. Learning in the presence of missing data typically involves imputation followed by learning with completed data [9]. Our methods perform imputation plus learning in one step, similar to EM on missing labels and features [6], but the underlying model assumption is quite different. A drawback of our methods is their restriction to linear classifiers only. One future extension is to explicitly map the partial feature matrix to a partially observed polynomial (or other) kernel Gram matrix, and apply our methods there. Though such a mapping proliferates the missing entries, we hope that the low-rank structure in the kernel matrix will allow us to recover labels that are nonlinear functions of the original features.

Acknowledgements: This work is supported in part by NSF IIS-0916038, NSF IIS-0953219, AFOSR FA9550-09-1-0313, and AFOSR FA9550-09-1-0423. We also wish to thank Brian Eriksson for useful discussions and source code implementing EM-based imputation.

⁴ Available at http://mulan.sourceforge.net/datasets.html

References

[1] Andreas Argyriou, Charles A. Micchelli, and Massimiliano Pontil. On spectral learning. Journal of Machine Learning Research, 11:935–953, 2010.
[2] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9:717–772, 2009.
[3] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56:2053–2080, 2010.
[4] Olivier Chapelle, Alexander Zien, and Bernhard Schölkopf, editors. Semi-Supervised Learning. MIT Press, 2006.
[5] André Elisseeff and Jason Weston. A kernel method for multi-labelled classification. In Thomas G. Dietterich, Suzanna Becker, and Zoubin Ghahramani, editors, NIPS, pages 681–687. MIT Press, 2001.
[6] Zoubin Ghahramani and Michael I. Jordan. Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems 6, pages 120–127. Morgan Kaufmann, 1994.
[7] Daniel Hsu, Sham Kakade, John Langford, and Tong Zhang. Multi-label prediction via compressed sensing. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 772–780. 2009.
[8] Shuiwang Ji, Lei Tang, Shipeng Yu, and Jieping Ye. Extracting shared subspace for multi-label classification. In KDD '08: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 381–389, New York, NY, USA, 2008. ACM.
[9] Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data. Wiley-Interscience, 2nd edition, September 2002.
[10] Shiqian Ma, Donald Goldfarb, and Lifeng Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming Series A, to appear (published online September 23, 2009).
[11] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231–252, 2010.
[12] Piyush Rai and Hal Daume. Multi-label prediction via sparse infinite CCA. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1518–1526. 2009.
[13] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560. Springer-Verlag, 2005.
[14] K. Trohidis, G. Tsoumakas, G. Kalliris, and I. Vlahavas. Multilabel classification of music into emotions. In Proc. 9th International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, PA, USA, 2008.
[15] G. Tsoumakas, I. Katakis, and I. Vlahavas. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook. Springer, 2nd edition, 2010.
[16] Xiaojin Zhu and Andrew B. Goldberg. Introduction to Semi-Supervised Learning. Morgan & Claypool, 2009.
Structured sparsity-inducing norms through submodular functions

Francis Bach
INRIA - Willow project-team
Laboratoire d'Informatique de l'École Normale Supérieure
Paris, France
[email protected]

Abstract

Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the $\ell_1$-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.

1 Introduction

The concept of parsimony is central in many scientific domains. In the context of statistics, signal processing or machine learning, it takes the form of variable or feature selection problems, and is commonly used in two situations: First, to make the model or the prediction more interpretable or cheaper to use, i.e., even if the underlying problem does not admit sparse solutions, one looks for the best sparse approximation. Second, sparsity can also be used given prior knowledge that the model should be sparse. In these two situations, reducing parsimony to finding models with low cardinality turns out to be limiting, and structured parsimony has emerged as a fruitful practical extension, with applications to image processing, text processing or bioinformatics (see, e.g., [1, 2, 3, 4, 5, 6, 7] and Section 4). For example, in [4], structured sparsity is used to encode prior knowledge regarding network relationships between genes, while in [6], it is used as an alternative to structured nonparametric Bayesian process based priors for topic models. Most of the work based on convex optimization and the design of dedicated sparsity-inducing norms has focused mainly on the specific allowed set of sparsity patterns [1, 2, 4, 6]: if $w \in \mathbb{R}^p$ denotes the predictor we aim to estimate, and $\operatorname{Supp}(w)$ denotes its support, then these norms are designed so that penalizing with these norms only leads to supports from a given family of allowed patterns. In this paper, we instead follow the approach of [8, 3] and consider specific penalty functions $F(\operatorname{Supp}(w))$ of the support set, which go beyond the cardinality function, but are not limited or designed to only forbid certain sparsity patterns. As shown in Section 6.2, these may also lead to restricted sets of supports, but their interpretation in terms of an explicit penalty on the support leads to additional insights into the behavior of structured sparsity-inducing norms (see, e.g., Section 4.1).
While direct greedy approaches (i.e., forward selection) to the problem are considered in [8, 3], we provide convex relaxations to the function $w \mapsto F(\operatorname{Supp}(w))$, which extend the traditional link between the $\ell_1$-norm and the cardinality function. This is done for a particular ensemble of set-functions $F$, namely nondecreasing submodular functions. Submodular functions may be seen as the set-function equivalent of convex functions, and exhibit many interesting properties that we review in Section 2 (see [9] for a tutorial on submodular analysis and [10, 11] for other applications to machine learning). This paper makes the following contributions:

- We make explicit links between submodularity and sparsity by showing that the convex envelope of the function $w \mapsto F(\operatorname{Supp}(w))$ on the $\ell_\infty$-ball may be readily obtained from the Lovász extension of the submodular function (Section 3).
- We provide generic algorithmic tools, i.e., subgradients and proximal operators (Section 5), as well as theoretical guarantees, i.e., conditions for support recovery or high-dimensional inference (Section 6), that extend classical results for the $\ell_1$-norm and show that many norms may be tackled by the exact same analysis and algorithms.
- By selecting specific submodular functions in Section 4, we recover and give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups [1, 2, 7], and we define new norms, in particular ones that can be used as non-factorial priors for supervised learning (Section 4). These are illustrated on simulation experiments in Section 7, where they outperform related greedy approaches [3].

Notation. For $w \in \mathbb{R}^p$, $\operatorname{Supp}(w) \subseteq V = \{1, \dots, p\}$ denotes the support of $w$, defined as $\operatorname{Supp}(w) = \{j \in V,\ w_j \neq 0\}$. For $w \in \mathbb{R}^p$ and $q \in [1, \infty]$, we denote by $\|w\|_q$ the $\ell_q$-norm of $w$. We denote by $|w| \in \mathbb{R}^p$ the vector of absolute values of the components of $w$. Moreover, given a vector $w$ and a matrix $Q$, $w_A$ and $Q_{AA}$ are the corresponding subvector and submatrix of $w$ and $Q$. Finally, for $w \in \mathbb{R}^p$ and $A \subseteq V$, $w(A) = \sum_{k \in A} w_k$ (this defines a modular set-function).

2 Review of submodular function theory

Throughout this paper, we consider a nondecreasing submodular function $F$ defined on the power set $2^V$ of $V = \{1, \dots, p\}$, i.e., such that:

$$\forall A, B \subseteq V, \quad F(A) + F(B) \ge F(A \cup B) + F(A \cap B) \quad \text{(submodularity)}$$
$$\forall A, B \subseteq V, \quad A \subseteq B \ \Rightarrow\ F(A) \le F(B) \quad \text{(monotonicity)}$$

Moreover, we assume that $F(\emptyset) = 0$. These set-functions are often referred to as polymatroid set-functions [12, 13]. Also, without loss of generality, we may assume that $F$ is strictly positive on singletons, i.e., for all $k \in V$, $F(\{k\}) > 0$. Indeed, if $F(\{k\}) = 0$, then by submodularity and monotonicity, if $A \ni k$, $F(A) = F(A \setminus \{k\})$ and thus we can simply consider $V \setminus \{k\}$ instead of $V$. Classical examples are the cardinality function (which will lead to the $\ell_1$-norm) and, given a partition of $V$ into $B_1 \cup \dots \cup B_k = V$, the set-function $A \mapsto F(A)$ which is equal to the number of groups $B_1, \dots, B_k$ with non-empty intersection with $A$ (which will lead to the grouped $\ell_1/\ell_\infty$-norm [1, 14]).

Lovász extension. Given any set-function $F$, one can define its Lovász extension $f : \mathbb{R}_+^p \to \mathbb{R}$ as follows: given $w \in \mathbb{R}_+^p$, we can order the components of $w$ in decreasing order $w_{j_1} \ge \dots \ge w_{j_p} \ge 0$; the value $f(w)$ is then defined as

$$f(w) = \sum_{k=1}^p w_{j_k} \left[ F(\{j_1, \dots, j_k\}) - F(\{j_1, \dots, j_{k-1}\}) \right]. \quad (1)$$
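Equation (1) is short enough to implement directly: sort the coordinates, then weight each by the marginal gain of $F$ along the resulting chain of sets. The sketch below is our own code, with $F$ passed as a Python function on frozensets.

```python
import numpy as np

def lovasz_extension(F, w):
    """Lovasz extension f(w) of Eq. (1), for w with nonnegative entries."""
    order = np.argsort(-np.asarray(w, dtype=float))   # w_{j_1} >= ... >= w_{j_p}
    value, prefix, F_prev = 0.0, set(), F(frozenset())
    for j in order:
        prefix.add(int(j))                            # the chain set {j_1, ..., j_k}
        F_cur = F(frozenset(prefix))
        value += w[j] * (F_cur - F_prev)              # marginal gain times w_{j_k}
        F_prev = F_cur
    return value

# Sanity check: the cardinality function recovers the sum of the entries.
cardinality = lambda A: float(len(A))
assert np.isclose(lovasz_extension(cardinality, np.array([0.3, 0.1, 0.5])), 0.9)
```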
The Lovász extension $f$ is always piecewise-linear, and when $F$ is submodular, it is also convex (see, e.g., [12, 9]). Moreover, for all $w \in \{0, 1\}^p$, $f(w) = F(\operatorname{Supp}(w))$: $f$ is indeed an extension from vectors in $\{0, 1\}^p$ (which can be identified with indicator vectors of sets) to all vectors in $\mathbb{R}_+^p$. Moreover, it turns out that minimizing $F$ over subsets, i.e., minimizing $f$ over $\{0, 1\}^p$, is equivalent to minimizing $f$ over $[0, 1]^p$ [13].

Submodular polyhedron and greedy algorithm. We denote by $P$ the submodular polyhedron [12], defined as the set of $s \in \mathbb{R}_+^p$ such that for all $A \subseteq V$, $s(A) \le F(A)$, i.e., $P = \{s \in \mathbb{R}_+^p,\ \forall A \subseteq V,\ s(A) \le F(A)\}$, where we use the notation $s(A) = \sum_{k \in A} s_k$. One important result in submodular analysis is that if $F$ is a nondecreasing submodular function, then we have a representation of $f$ as a maximum of linear functions [12, 9], i.e., for all $w \in \mathbb{R}_+^p$,

$$f(w) = \max_{s \in P} w^\top s. \quad (2)$$

Instead of solving a linear program with $p + 2^p$ constraints, a solution $s$ may then be obtained by the following "greedy algorithm": order the components of $w$ in decreasing order $w_{j_1} \ge \dots \ge w_{j_p}$, and then take for all $k \in \{1, \dots, p\}$, $s_{j_k} = F(\{j_1, \dots, j_k\}) - F(\{j_1, \dots, j_{k-1}\})$.

Figure 1: Polyhedral unit ball, for 4 different submodular functions (two variables), with different stable inseparable sets leading to different sets of extreme points; changing values of $F$ may make some of the extreme points disappear. From left to right: $F(A) = |A|^{1/2}$ (all possible extreme points), $F(A) = |A|$ (leading to the $\ell_1$-norm), $F(A) = \min\{|A|, 1\}$ (leading to the $\ell_\infty$-norm), $F(A) = \frac{1}{2} 1_{\{A \cap \{2\} \neq \emptyset\}} + 1_{\{A \neq \emptyset\}}$ (leading to the structured norm $\Omega(w) = \frac{1}{2}|w_2| + \|w\|_\infty$).

Stable sets. A set $A$ is said to be stable if it cannot be augmented without increasing $F$, i.e., if for all sets $B \supseteq A$, $B \neq A \Rightarrow F(B) > F(A)$. If $F$ is strictly increasing (such as for the cardinality), then all sets are stable. The set of stable sets is closed by intersection [13], and will correspond to the set of allowed sparsity patterns (see Section 6.2).

Separable sets. A set $A$ is separable if we can find a partition of $A$ into $A = B_1 \cup \dots \cup B_k$ such that $F(A) = F(B_1) + \dots + F(B_k)$. A set $A$ is inseparable if it is not separable. As shown in [13], the submodular polytope $P$ has full dimension $p$ as soon as $F$ is strictly positive on all singletons, and its faces are exactly the sets $\{s_k = 0\}$ for $k \in V$ and $\{s(A) = F(A)\}$ for stable and inseparable sets $A$. We denote by $\mathcal{T}$ the set of such sets. This implies that $P = \{s \in \mathbb{R}_+^p,\ \forall A \in \mathcal{T},\ s(A) \le F(A)\}$. These stable inseparable sets will play a role when describing extreme points of unit balls of our new norms (Section 3) and for deriving concentration inequalities in Section 6.3. For the cardinality function, stable and inseparable sets are singletons.

3 Definition and properties of structured norms

We define the function $\Omega(w) = f(|w|)$, where $|w|$ is the vector in $\mathbb{R}_+^p$ composed of absolute values of $w$ and $f$ the Lovász extension of $F$. We have the following properties (see proof in [15]), which show that we indeed define a norm and that it is the desired convex envelope:

Proposition 1 (Convex envelope, dual norm) Assume that the set-function $F$ is submodular, nondecreasing, and strictly positive for all singletons. Define $\Omega : w \mapsto f(|w|)$. Then: (i) $\Omega$ is a norm on $\mathbb{R}^p$, (ii) $\Omega$ is the convex envelope of the function $g : w \mapsto F(\operatorname{Supp}(w))$ on the unit $\ell_\infty$-ball, (iii) the dual norm (see, e.g., [16]) of $\Omega$ is equal to $\Omega^*(s) = \max_{A \subseteq V} \frac{\|s_A\|_1}{F(A)} = \max_{A \in \mathcal{T}} \frac{\|s_A\|_1}{F(A)}$.
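The greedy algorithm returns not only the value of Eq. (2) but also a maximizer $s \in P$, which is exactly what Section 5 later uses as a subgradient of $\Omega(w) = f(|w|)$. A sketch (our code), with checks mirroring two of the Figure 1 examples:

```python
import numpy as np

def greedy_maximizer(F, w):
    """A maximizer s of max_{s in P} w^T s, via the greedy algorithm of Eq. (2)."""
    order = np.argsort(-np.asarray(w, dtype=float))
    s, prefix, F_prev = np.zeros(len(w)), set(), F(frozenset())
    for j in order:
        prefix.add(int(j))
        F_cur = F(frozenset(prefix))
        s[j] = F_cur - F_prev     # s_{j_k} = F({j_1..j_k}) - F({j_1..j_{k-1}})
        F_prev = F_cur
    return s

def omega(F, w):
    """Omega(w) = f(|w|) = max_{s in P} s^T |w| (Section 3)."""
    a = np.abs(np.asarray(w, dtype=float))
    return greedy_maximizer(F, a) @ a

w = np.array([0.5, -2.0, 1.0])
assert np.isclose(omega(lambda A: float(len(A)), w), np.abs(w).sum())          # l1
assert np.isclose(omega(lambda A: float(min(len(A), 1)), w), np.abs(w).max())  # l_inf
```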
We provide examples of submodular set-functions and norms in Section 4, where we go from set-functions to norms, and vice versa. From the definition of the Lovász extension in Eq. (1), we see that $\Omega$ is a polyhedral norm (i.e., its unit ball is a polyhedron). The following proposition gives the set of extreme points of the unit ball (see proof in [15] and examples in Figure 1):

Proposition 2 (Extreme points of unit ball) The extreme points of the unit ball of $\Omega$ are the vectors $\frac{1}{F(A)} s$, with $s \in \{-1, 0, 1\}^p$, $\operatorname{Supp}(s) = A$ and $A$ a stable inseparable set.

This proposition shows that, depending on the number and cardinality of the inseparable stable sets, we can go from $2p$ (only singletons) to $3^p - 1$ extreme points (all possible sign vectors). We show in Figure 1 examples of balls for $p = 2$, as well as sets of extreme points. These extreme points will play a role in the concentration inequalities derived in Section 6.

Figure 2: Sequence and groups: (left) groups for contiguous patterns, (right) groups for penalizing the number of jumps in the indicator vector sequence.

Figure 3: Regularization path for a penalized least-squares problem (black: variables that should be active, red: variables that should be left out). From left to right: $\ell_1$-norm penalization (a wrong variable is included with the correct ones), polyhedral norm for rectangles in 2D, with zoom (all variables come in together), mix of the two norms (correct behavior).

4 Examples of nondecreasing submodular functions

We consider three main types of submodular functions with potential applications to regularization for supervised learning. Some existing norms are shown to be examples of our framework (Section 4.1, Section 4.3), while other novel norms are designed from specific submodular functions (Section 4.2). Other examples of submodular functions, in particular in terms of matroids and entropies, may be found in [12, 10, 11] and could also lead to interesting new norms. Note that set covers, which are common examples of submodular functions, are subcases of the set-functions defined in Section 4.1 (see, e.g., [9]).

4.1 Norms defined with non-overlapping or overlapping groups

We consider grouped norms defined with potentially overlapping groups [1, 2], i.e., $\Omega(w) = \sum_{G \subseteq V} d(G) \|w_G\|_\infty$, where $d$ is a nonnegative set-function (with potentially $d(G) = 0$ when $G$ should not be considered in the norm). It is a norm as soon as $\bigcup_{G,\, d(G) > 0} G = V$, and it corresponds to the nondecreasing submodular function $F(A) = \sum_{G \cap A \neq \emptyset} d(G)$. In the case where $\ell_\infty$-norms are replaced by $\ell_2$-norms, [2] has shown that the set of allowed sparsity patterns are intersections of complements of groups $G$ with strictly positive weights. These sets happen to be the set of stable sets for the corresponding submodular function; thus the analysis provided in Section 6.2 extends the result of [2] to the new case of $\ell_\infty$-norms. However, in our situation, we can give a reinterpretation through a submodular function that counts the number of times the support $A$ intersects groups $G$ with nonzero weights. This goes beyond restricting the set of allowed sparsity patterns to stable sets. We show later in this section some insights gained by this reinterpretation. We now give some examples of norms, with various topologies of groups.
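The cover function behind these grouped norms plugs directly into the sketches above; the check below confirms numerically that $\Omega$ coincides with the grouped $\ell_1/\ell_\infty$ norm (our code, reusing `omega()` from the greedy-algorithm sketch; the groups and weights are illustrative).

```python
import numpy as np

# F(A) = sum of d(G) over groups G with G ∩ A nonempty, on p = 4 variables.
groups = [(frozenset({0, 1}), 1.0), (frozenset({1, 2}), 1.0), (frozenset({3}), 0.5)]

def group_cover(A):
    return sum(dG for G, dG in groups if G & A)

w = np.array([1.0, 0.0, 0.0, 2.0])
grouped_norm = sum(dG * max(abs(w[i]) for i in G) for G, dG in groups)
assert np.isclose(omega(group_cover, w), grouped_norm)   # both equal 2.0 here
```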
Hierarchical norms defined on directed acyclic graphs [1, 5, 6] correspond to the set-function F (A) which is the cardinality of the union of ancestors of elements in A. These have been applied to bioinformatics [5], computer vision and topic models [6]. Norms defined on grids. If we assume that the p variables are organized in a 1D, 2D or 3D grid, [2] considers norms based on overlapping groups leading to stable sets equal to rectangular or convex shapes, with applications in computer vision [17]. For example, for the groups defined in the left side of Figure 2 (with unit weights), we have F (A) = p ? 2 + range(A) if A 6= ? and F (?) = 0 (the range of A is equal to max(A) ? min(A) + 1). From empty sets to non-empty sets, there is a gap of p ? 1, which is larger than differences among non-empty sets. This leads to the undesired result, which has been already observed by [2], of adding all variables in one step, rather than gradually, when the regularization parameter decreases in a regularized optimization problem. In order to counterbalance this effect, adding a constant times the cardinality function has the effect of making the first gap relatively smaller. This corresponds to adding a constant times the ?1 -norm and, as shown in Figure 3, solves the problem of having all variables coming together. All patterns are then allowed, but contiguous ones are encouraged rather than forced. 4 Another interesting new norm may be defined from the groups in the right side of Figure 2. Indeed, it corresponds to the function F (A) equal to |A| plus the number of intervals of A. Note that this also favors contiguous patterns but is not limited to selecting a single interval (like the norm obtained from groups in the left side of Figure 2). Note that it is to be contrasted with the total variation (a.k.a. fused Lasso penalty [18]), which is a relaxation of the number of jumps in a vector w rather than in its support. In 2D or 3D, this extends to the notion of perimeter and area, but we do not pursue such extensions here. 4.2 Spectral functions of submatrices Given a positive semidefinite matrix Q ? Rp?p and a real-valued function h from R+ ? R, one may Pp define tr[h(Q)] as i=1 h(?i ) where ?1 , . . . , ?p are the (nonnegative) eigenvalues of Q [19]. We can thus define the set-function F (A) = tr h(QAA ) for A ? V . The functions h(?) = log(?+t) for t > 0 lead to submodular functions, as they correspond to Rentropies of Gaussian random variables ? (see, e.g., [12, 9]). Thus, since for q ? (0, 1), ?q = q sin? q? 0 log(1 + ?/t)tq?1 dt (see, e.g., [20]), h(?) = ?q for q ? (0, 1] are positive linear combinations of functions that lead to nondecreasing submodular functions. Thus, they are also nondecreasing submodular functions, and, to the best of our knowledge, provide novel examples of such functions. In the context of supervised learning fromPa design matrix X ? Rn?p , we naturally use Q = X ? X. ? XA = k?A Xk? Xk (where XA denotes the submatrix of X with If h is linear, then F (A) = tr XA columns in A) and we obtain a weighted cardinality function and hence and a weighted ?1 -norm, which is a factorial prior, i.e., it is a sum of terms depending on each variable independently. In a frequentist setting, the Mallows CL penalty [21] depends on the degrees of freedom, of the ? ? form tr XA XA (XA XA + ?I)?1 . This is a non-factorial prior but unfortunately it does not lead to a submodular function. In a Bayesian context however, it is shown by [22] that penalties of the form ? 
log det(XA XA + ?I) (which lead to submodular functions) correspond to marginal likelihoods associated to the set A and have good behavior when used within a non-convex framework. This highlights the need for non-factorial priors which are sub-linear functions of the eigenvalues of ? XA XA , which is exactly what nondecreasing submodular function of submatrices are. We do not pursue the extensive evaluation of non-factorial convex priors in this paper but provide in simulations ? examples with F (A) = tr(XA XA )1/2 (which is equal to the trace norm of XA [16]). 4.3 Functions of cardinality For F (A) = h(|A|) where h is nondecreasing, such that h(0) = 0 and concave, then, from Eq. (1), ?(w) is defined from the rank statistics of |w| ? Rp+ , i.e., if |w(1) | > |w(2) | > ? ? ? > |w(p) |, Pp then ?(w) = k=1 [h(k) ? h(k ? 1)]|w(k) |. This includes the sum of the q largest elements, and might lead to interesting new norms for unstructured variable selection but this is not pursued here. However, the algorithms and analysis presented in Section 5 and Section 6 apply to this case. 5 Convex analysis and optimization In this section we provide algorithmic tools related to optimization problems based on the regularization by our novel sparsity-inducing norms. Note that since these norms are polyhedral norms with unit balls having potentially an exponential number of vertices or faces, regular linear programming toolboxes may not be used. Subgradient. From ?(w) = maxs?P s? |w| and the greedy algorithm1 presented in Section 2, one can easily get in polynomial time one subgradient as one of the maximizers s. This allows to use subgradient descent, with, as shown in Figure 4, slow convergence compared to proximal methods. Proximal operator. Given regularized problems of the form minw?Rp L(w) + ??(w), where L is differentiable with Lipschitz-continuous gradient, proximal methods have been shown to be particularly efficient first-order methods (see, e.g., [23]). In this paper, we consider the methods ?ISTA? and its accelerated variants ?FISTA? [23], which are compared in Figure 4. 1 The greedy algorithm to find extreme points of the submodular polyhedron should not be confused with the greedy algorithm (e.g., forward selection) that we consider in Section 7. 5 To apply these methods, it suffices to be able to solve efficiently problems of the form: minw?Rp 12 kw ? zk22 + ??(w). In the case of the ?1 -norm, this reduces to soft thresholding of z, the following proposition (see proof in [15]) shows that this is equivalent to a particular algorithm for submodular function minimization, namely the minimum-norm-point algorithm, which has no complexity bound but is empirically faster than algorithms with such bounds [12]: Proposition 3 (Proximal operator) Let z ? Rp and ? > 0, minimizing 12 kw ? zk22 + ??(w) is equivalent to finding the minimum of the submodular function A 7? ?F (A) ? |z|(A) with the minimum-norm-point algorithm. In [15], it is shown how a solution for one problem may be obtained from a solution to the other problem. Moreover, any algorithm for minimizing submodular functions allows to get directly the support of the unique solution of the proximal problem and that with a sequence of submodular function minimizations, the full solution may also be obtained. Similar links between convex optimization and minimization of submodular functions have been considered (see, e.g., [24]). 
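Proposition 3 can be checked directly on small cases by replacing the minimum-norm-point algorithm with brute-force enumeration of the set-function $A \mapsto \lambda F(A) - |z|(A)$; per the discussion that follows, its minimizer gives the support of the proximal solution. A toy sketch (our code, exponential in $p$ and only meant for illustration):

```python
import itertools
import numpy as np

def prox_support_bruteforce(F, z, lam):
    """Support of argmin_w (1/2)||w - z||_2^2 + lam * Omega(w), via Proposition 3:
    minimize A -> lam * F(A) - |z|(A) over all subsets (small p only)."""
    p = len(z)
    best_val, best_A = 0.0, frozenset()          # the empty set has value 0
    for k in range(1, p + 1):
        for A in itertools.combinations(range(p), k):
            val = lam * F(frozenset(A)) - np.abs(z[list(A)]).sum()
            if val < best_val:
                best_val, best_A = val, frozenset(A)
    return best_A

z = np.array([2.0, 0.3, -1.5])
# With F(A) = |A|, Omega is the l1-norm and the prox is soft thresholding:
# only the coordinates with |z_k| > lam survive.
assert prox_support_bruteforce(lambda A: float(len(A)), z, lam=1.0) == {0, 2}
```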
However, these are dedicated to symmetric submodular functions (such as the ones obtained from graph cuts) and are thus not directly applicable to our situation of non-increasing submodular functions. Finally, note that using the minimum-norm-point algorithm leads to a generic algorithm that can be applied to any submodular functions F , and that it may be rather inefficient for simpler subcases (e.g., the ?1 /?? -norm, tree-structured groups [6], or general overlapping groups [7]). 6 Sparsity-inducing properties In this section, we consider a fixed design matrix X ? Rn?p and y ? Rn a vector of random responses. Given ? > 0, we define w ? as a minimizer of the regularized least-squares cost: minw?Rp 1 2n ky ? Xwk22 + ??(w). (3) We study the sparsity-inducing properties of solutions of Eq. (3), i.e., we determine in Section 6.2 which patterns are allowed and in Section 6.3 which sufficient conditions lead to correct estimation. Like recent analysis of sparsity-inducing norms [25], the analysis provided in this section relies heavily on decomposability properties of our norm ?. 6.1 Decomposability For a subset J of V , we denote by FJ : 2J ? R the restriction of F to J, defined for A ? J c by FJ (A) = F (A), and by F J : 2J ? R the contraction of F by J, defined for A ? J c by F J (A) = F (A ? J) ? F (A). These two functions are submodular and nondecreasing as soon as F is (see, e.g., [12]). We denote by ?J the norm on RJ defined through the submodular function FJ , and ?J the pseudoc norm defined on RJ defined through F J (as shown in Proposition 4, it is a norm only when J is a stable set). Note that ?J c (a norm on J c ) is in general different from ?J . Moreover, ?J (wJ ) is actually equal to ?(w) ? where w ?J = wJ and w ?J c = 0, i.e., it is the restriction of ? to J. We can now prove the following decomposition properties, which show that under certain circumstances, we can decompose the norm ? on subsets J and their complements: Proposition 4 (Decomposition) Given J ? V and ?J and ?J defined as above, we have: (i) ?w ? Rp , ?(w) > ?J (wJ ) + ?J (wJ c ), (ii) ?w ? Rp , if minj?J |wj | > maxj?J c |wj | , then ?(w) = ?J (wJ ) + ?J (wJ c ), c (iii) ?J is a norm on RJ if and only if J is a stable set. 6.2 Sparsity patterns In this section, we do not make any assumptions regarding the correct specification of the linear model. We show that with probability one, only stable support sets may be obtained (see proof in [15]). For simplicity, we assume invertibility of X ? X, which forbids the high-dimensional situation p > n we consider in Section 6.3, but we could consider assumptions similar to the ones used in [2]. 6 Proposition 5 (Stable sparsity patterns) Assume y ? Rn has an absolutely continuous density with respect to the Lebesgue measure and that X ? X is invertible. Then the minimizer w ? of Eq. (3) is unique and, with probability one, its support Supp(w) ? is a stable set. 6.3 High-dimensional inference We now assume that the linear model is well-specified and extend results from [26] for sufficient support recovery conditions and from [25] for estimation consistency. As seen in Proposition 4, the norm ? is decomposable and we use this property extensively in this section. We denote by (J) ?(J) = minB?J c F (B?J)?F ; by submodularity and monotonicity of F , ?(J) is always between F (B) zero and one, and, as soon as J is stable it is strictly positive (for the ?1 -norm, ?(J) = 1). 
6.3 High-dimensional inference

We now assume that the linear model is well-specified and extend results from [26] for sufficient support recovery conditions and from [25] for estimation consistency. As seen in Proposition 4, the norm $\Omega$ is decomposable and we use this property extensively in this section. We denote by $\rho(J) = \min_{B \subseteq J^c} \frac{F(B \cup J) - F(J)}{F(B)}$; by submodularity and monotonicity of $F$, $\rho(J)$ is always between zero and one, and, as soon as $J$ is stable, it is strictly positive (for the $\ell_1$-norm, $\rho(J) = 1$). Moreover, we denote by $c(J) = \sup_{w \in \mathbb{R}^p} \Omega_J(w_J)/\|w_J\|_2$ the equivalence constant between the norm $\Omega_J$ and the $\ell_2$-norm. We always have $c(J) \leq |J|^{1/2} \max_{k \in V} F(\{k\})$ (with equality for the $\ell_1$-norm).

The following propositions allow us to recover and extend well-known results for the $\ell_1$-norm: Propositions 6 and 8 extend results based on support recovery conditions [26], while Propositions 7 and 8 extend results based on restricted eigenvalue conditions (see, e.g., [25]). We can also recover results for the $\ell_1/\ell_\infty$-norm [14]. As shown in [15], proof techniques are similar and are adapted through the decomposition properties from Proposition 4.

Proposition 6 (Support recovery) Assume that $y = Xw^* + \sigma\varepsilon$, where $\varepsilon$ is a standard multivariate normal vector. Let $Q = \frac{1}{n}X^\top X \in \mathbb{R}^{p \times p}$. Denote by $J$ the smallest stable set containing the support $\mathrm{Supp}(w^*)$ of $w^*$. Define $\nu = \min_{j,\,w_j^* \neq 0} |w_j^*| > 0$, assume $\kappa = \lambda_{\min}(Q_{JJ}) > 0$ and that for $\eta > 0$, $(\Omega^J)^*\big[(\Omega_J(Q_{JJ}^{-1}Q_{Jj}))_{j \in J^c}\big] \leq 1 - \eta$. Then, if $\lambda \leq \frac{\kappa\nu}{2c(J)}$, the minimizer $\hat w$ is unique and has support equal to $J$, with probability larger than $1 - 3P\big(\Omega^*(z) > \frac{\lambda\eta\rho(J)\sqrt{n}}{2\sigma}\big)$, where $z$ is a multivariate normal with covariance matrix $Q$.

Proposition 7 (Consistency) Assume that $y = Xw^* + \sigma\varepsilon$, where $\varepsilon$ is a standard multivariate normal vector. Let $Q = \frac{1}{n}X^\top X \in \mathbb{R}^{p \times p}$. Denote by $J$ the smallest stable set containing the support $\mathrm{Supp}(w^*)$ of $w^*$. Assume that for all $\Delta$ such that $\Omega^J(\Delta_{J^c}) \leq 3\Omega_J(\Delta_J)$, we have $\Delta^\top Q\Delta \geq \kappa\|\Delta_J\|_2^2$. Then we have $\Omega(\hat w - w^*) \leq \frac{24\,c(J)^2\lambda}{\kappa\,\rho(J)^2}$ and $\frac{1}{n}\|X\hat w - Xw^*\|_2^2 \leq \frac{36\,c(J)^2\lambda^2}{\kappa\,\rho(J)^2}$, with probability larger than $1 - P\big(\Omega^*(z) > \frac{\lambda\rho(J)\sqrt{n}}{2\sigma}\big)$, where $z$ is a multivariate normal with covariance matrix $Q$.

Proposition 8 (Concentration inequalities) Let $z$ be a normal variable with covariance matrix $Q$. Let $\mathcal{T}$ be the set of stable inseparable sets. Then $P(\Omega^*(z) > t) \leq \sum_{A \in \mathcal{T}} 2^{|A|} \exp\big(-\frac{t^2 F(A)^2/2}{\mathbf{1}_A^\top Q\,\mathbf{1}_A}\big)$.

7 Experiments

We provide illustrations on toy examples of some of the results presented in the paper. We consider the regularized least-squares problem of Eq. (3), with data generated as follows: given $p$, $n$, $k$, the design matrix $X \in \mathbb{R}^{n \times p}$ is a matrix of i.i.d. Gaussian components, normalized to have unit $\ell_2$-norm columns. A set $J$ of cardinality $k$ is chosen at random and the weights $w_J^*$ are sampled from a standard multivariate Gaussian distribution, with $w_{J^c}^* = 0$. We then take $y = Xw^* + n^{-1/2}\|Xw^*\|_2\,\varepsilon$ where $\varepsilon$ is a standard Gaussian vector (this corresponds to a unit signal-to-noise ratio).

Proximal methods vs. subgradient descent. For the submodular function $F(A) = |A|^{1/2}$ (a simple submodular function beyond the cardinality) we compare the three optimization algorithms described in Section 5, subgradient descent and two proximal methods, ISTA and its accelerated version FISTA [23], for $p = n = 1000$, $k = 100$ and $\lambda = 0.1$. Other settings and other set-functions would lead to results similar to the ones presented in Figure 4: FISTA is faster than ISTA, and much faster than subgradient descent.

Relaxation of the combinatorial optimization problem. We compare three strategies for solving the combinatorial optimization problem $\min_{w \in \mathbb{R}^p} \frac{1}{2n}\|y - Xw\|_2^2 + \lambda F(\mathrm{Supp}(w))$ with $F(A) = \mathrm{tr}(X_A^\top X_A)^{1/2}$: the approach based on our sparsity-inducing norms, the simpler greedy (forward selection) approach proposed in [8, 3], and thresholding the ordinary least-squares estimate. For all methods, we try all possible regularization parameters.
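The synthetic setup just described can be reproduced directly; a sketch follows (Python/NumPy; the function name is ours).

```python
import numpy as np

def generate_data(p, n, k, seed=0):
    """Design with i.i.d. Gaussian entries and unit l2-norm columns,
    k-sparse Gaussian weights, and unit signal-to-noise ratio."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)                 # unit-norm columns
    w_star = np.zeros(p)
    J = rng.choice(p, size=k, replace=False)       # random support of size k
    w_star[J] = rng.standard_normal(k)
    noise = rng.standard_normal(n)
    y = X @ w_star + np.linalg.norm(X @ w_star) / np.sqrt(n) * noise
    return X, y, w_star
```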
Figure 4: (Left) Comparison of iterative optimization algorithms (value of objective function $f(w) - \min(f)$ vs. running time in seconds; legend: FISTA, ISTA, subgradient). (Middle/Right) Relaxation of the combinatorial optimization problem, showing residual error $\frac{1}{n}\|y - X\hat w\|_2^2$ vs. penalty $F(\mathrm{Supp}(\hat w))$ (legend: thresholded OLS, greedy, submodular): (middle) high-dimensional case ($p = 120$, $n = 20$, $k = 40$), (right) lower-dimensional case ($p = 120$, $n = 120$, $k = 40$).

Table 1: Normalized mean-square prediction errors $\|X\hat w - Xw^*\|_2^2/n$ (multiplied by 100) with optimal regularization parameters (averaged over 50 replications, with standard deviations divided by $\sqrt{50}$). The performance of the submodular method is shown, then differences from all methods to this particular one are computed, and shown in bold when they are significantly greater than zero, as measured by a paired t-test with level 5% (i.e., when the submodular method is significantly better).

  p    n    k   submodular      l2 vs. submod.   l1 vs. submod.   greedy vs. submod.
 120  120  80   40.8 ± 0.8       -2.6 ± 0.5        0.6 ± 0.0        21.8 ± 0.9
 120  120  40   35.9 ± 0.8        2.4 ± 0.4        0.3 ± 0.0        15.8 ± 1.0
 120  120  20   29.0 ± 1.0        9.4 ± 0.5       -0.1 ± 0.0         6.7 ± 0.9
 120  120  10   20.4 ± 1.0       17.5 ± 0.5       -0.2 ± 0.0        -2.8 ± 0.8
 120  120   6   15.4 ± 0.9       22.7 ± 0.5       -0.2 ± 0.0        -5.3 ± 0.8
 120  120   4   11.7 ± 0.9       26.3 ± 0.5       -0.1 ± 0.0        -6.0 ± 0.8
 120   20  80   46.8 ± 2.1       -0.6 ± 0.5        3.0 ± 0.9        22.9 ± 2.3
 120   20  40   47.9 ± 1.9       -0.3 ± 0.5        3.5 ± 0.9        23.7 ± 2.0
 120   20  20   49.4 ± 2.0        0.4 ± 0.5        2.2 ± 0.8        23.5 ± 2.1
 120   20  10   49.2 ± 2.0        0.0 ± 0.6        1.0 ± 0.8        20.3 ± 2.6
 120   20   6   43.5 ± 2.0        3.5 ± 0.8        0.9 ± 0.6        24.4 ± 3.0
 120   20   4   41.0 ± 2.1        4.8 ± 0.7       -1.3 ± 0.5        25.1 ± 3.5

We see in the right plots of Figure 4 that for hard cases (middle plot) convex optimization techniques perform better than other approaches, while for easier cases with more observations (right plot), they do as well as greedy approaches.

Non-factorial priors for variable selection. We now focus on the predictive performance and compare our new norm with $F(A) = \mathrm{tr}(X_A^\top X_A)^{1/2}$ to greedy approaches [3] and to regularization by $\ell_1$ or $\ell_2$ norms. As shown in Table 1, the new norm based on non-factorial priors is more robust than the $\ell_1$-norm to lower numbers of observations $n$ and to larger cardinality of support $k$.

8 Conclusions

We have presented a family of sparsity-inducing norms dedicated to incorporating prior knowledge or structural constraints on the support of linear predictors. We have provided a set of common algorithms and theoretical results, as well as simulations on synthetic examples illustrating the good behavior of these norms. Several avenues are worth investigating: first, we could follow current practice in sparse methods, e.g., by considering related adapted concave penalties to enhance sparsity-inducing norms, or by extending some of the concepts for norms of matrices, with potential applications in matrix factorization or multi-task learning (see, e.g., [27] for an application of submodular functions to dictionary learning). Second, links between submodularity and sparsity could be studied further, in particular by considering submodular relaxations of other combinatorial functions, or studying links with other polyhedral norms such as the total variation, which are known to be similarly associated with symmetric submodular set-functions such as graph cuts [24].

Acknowledgements.
This paper was partially supported by the Agence Nationale de la Recherche (MGA Project) and the European Research Council (SIERRA Project). The author would like to thank Edouard Grave, Rodolphe Jenatton, Armand Joulin, Julien Mairal and Guillaume Obozinski for discussions related to this work.

References

[1] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009.
[2] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523, 2009.
[3] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proc. ICML, 2009.
[4] L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlaps and graph Lasso. In Proc. ICML, 2009.
[5] S. Kim and E. Xing. Tree-guided group Lasso for multi-task regression with structured sparsity. In Proc. ICML, 2010.
[6] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In Proc. ICML, 2010.
[7] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. In Adv. NIPS, 2010.
[8] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. IEEE Transactions on Information Theory, 52(9):4036–4048, 2006.
[9] F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report 00527714, HAL, 2010.
[10] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc. UAI, 2005.
[11] Y. Kawahara, K. Nagano, K. Tsuda, and J. A. Bilmes. Submodularity cuts and applications. In Adv. NIPS, 2009.
[12] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[13] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial Optimization – Eureka, You Shrink!, pages 11–26. Springer, 2003.
[14] S. Negahban and M. J. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of l1-linf-regularization. In Adv. NIPS, 2008.
[15] F. Bach. Structured sparsity-inducing norms through submodular functions. Technical Report 00511310, HAL, 2010.
[16] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[17] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In Proc. AISTATS, 2009.
[18] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. J. Roy. Stat. Soc. B, 67(1):91–108, 2005.
[19] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge Univ. Press, 1990.
[20] T. Ando. Concavity of certain maps on positive definite matrices and applications to Hadamard products. Linear Algebra and its Applications, 26:203–241, 1979.
[21] C. L. Mallows. Some comments on Cp. Technometrics, 15(4):661–675, 1973.
[22] D. Wipf and S. Nagarajan. Sparse estimation using general likelihoods and non-factorial priors. In Adv. NIPS, 2009.
[23] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[24] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288–307, 2009.
[25] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Adv. NIPS, 2009.
[26] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
[27] A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In Proc. ICML, 2010.
Shadow Dirichlet for Restricted Probability Modeling

Bela A. Frigyik, Maya R. Gupta, and Yihua Chen
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
[email protected], [email protected], [email protected]

Abstract

Although the Dirichlet distribution is widely used, the independence structure of its components limits its accuracy as a model. The proposed shadow Dirichlet distribution manipulates the support in order to model probability mass functions (pmfs) with dependencies or constraints that often arise in real-world problems, such as regularized pmfs, monotonic pmfs, and pmfs with bounded variation. We describe some properties of this new class of distributions, provide maximum entropy constructions, give an expectation-maximization method for estimating the mean parameter, and illustrate with real data.

1 Modeling Probabilities for Machine Learning

Modeling probability mass functions (pmfs) as random is useful in solving many real-world problems. A common random model for pmfs is the Dirichlet distribution [1]. The Dirichlet is conjugate to the multinomial and hence mathematically convenient for Bayesian inference, and the number of parameters is conveniently linear in the size of the sample space. However, the Dirichlet is a distribution over the entire probability simplex, and for many problems this is simply the wrong domain if there is application-specific prior knowledge that the pmfs come from a restricted subset of the simplex.

For example, in natural language modeling, it is common to regularize a pmf over n-grams by some generic language model distribution $q_0$; that is, the pmf to be modeled is assumed to have the form $\theta = \lambda q + (1-\lambda)q_0$ for some $q$ in the simplex, $\lambda \in (0,1)$, and a fixed generic model $q_0$ [2]. But once $q_0$ and $\lambda$ are fixed, the pmf $\theta$ can only come from a subset of the simplex. Another natural language processing example is modeling the probability of keywords in a dictionary where some words are related, such as espresso and latte, and evidence for the one is to some extent evidence for the other. This relationship can be captured with a bounded variation model that would constrain the modeled probability of espresso to be within some $\epsilon$ of the modeled probability of latte. We show that such bounds on the variation between pmf components also restrict the domain of the pmf to a subset of the simplex. As a third example of restricting the domain, the similarity discriminant analysis classifier estimates class-conditional pmfs that are constrained to be monotonically increasing over an ordered sample space of discrete similarity values [3].

In this paper we propose a simple variant of the Dirichlet whose support is a subset of the simplex, explore its properties, and show how to learn the model from data. We first discuss the alternative solution of renormalizing the Dirichlet over the desired subset of the simplex, and other related work. Then we propose the shadow Dirichlet distribution; explain how to construct a shadow Dirichlet for three types of restricted domains: the regularized pmf case, bounded variation between pmf components, and monotonic pmfs; and discuss the most general case. We show how to use the expectation-maximization (EM) algorithm to estimate the shadow Dirichlet parameter $\alpha$, and present simulation results for the estimation.

Figure 1: Dirichlet, shadow Dirichlet, and renormalized Dirichlet for $\alpha = [3.94\ 2.25\ 2.81]$.
2 Related Work

One solution to modeling pmfs on only a subset of the simplex is to simply restrict the support of the Dirichlet to the desired support $\tilde S$, and renormalize the Dirichlet over $\tilde S$ (see Fig. 1 for an example). This renormalized Dirichlet has the advantage that it is still a conjugate distribution for the multinomial. Nallapati et al. considered the renormalized Dirichlet for language modeling, but found it difficult to use because the density requires numerical integration to compute the normalizer [4]. In addition, there is no closed-form solution for the mean, covariance, or peak of the renormalized Dirichlet, making it difficult to work with. Table 1 summarizes these properties. Additionally, generating samples from the renormalized Dirichlet is inefficient: one draws samples from the standard Dirichlet, then rejects realizations that are outside $\tilde S$. For high-dimensional sample spaces, this could greatly increase the time to generate samples.

Although the Dirichlet is a classic and popular distribution on the simplex, Aitchison warns it "is totally inadequate for the description of the variability of compositional data," because of its "implied independence structure and so the Dirichlet class is unlikely to be of any great use for describing compositions whose components have even weak forms of dependence" [5]. Aitchison instead championed a logistic normal distribution with more parameters to control covariance between components. A number of variants of the Dirichlet that can capture more dependence have been proposed and analyzed. For example, the scaled Dirichlet enables a more flexible shape for the distribution [5], but does not change the support. The original Dirichlet$(\alpha_1, \alpha_2, \dots, \alpha_d)$ can be derived as $Y_j / \sum_j Y_j$ where $Y_j \sim \Gamma(\alpha_j, \beta)$, whereas the scaled Dirichlet is derived from $Y_j \sim \Gamma(\alpha_j, \beta_j)$, resulting in the density
$p(\theta) = \frac{1}{\gamma}\,\frac{\prod_j \beta_j^{\alpha_j}\,\theta_j^{\alpha_j - 1}}{\big(\sum_i \beta_i \theta_i\big)^{\alpha_1 + \cdots + \alpha_d}},$
where $\alpha, \beta \in \mathbb{R}_+^d$ are parameters, and $\gamma$ is the normalizer. Another variant is the generalized Dirichlet [6], which also has parameters $\alpha, \beta \in \mathbb{R}_+^d$, and allows greater control of the covariance structure, again without changing the support. As perhaps first noted by Karl Pearson [7] and expounded upon by Aitchison [5], correlations of proportional data can be very misleading. Many Dirichlet variants have been generalizations of the Connor-Mossiman variant, Dirichlet process variants, other compound Dirichlet models, and hierarchical Dirichlet models. Ongaro et al. [8] propose the flexible Dirichlet distribution by forming a re-parameterized mixture of Dirichlet distributions. Rayens and Srinivasan [9] considered the dependence structure for the general Dirichlet family called the generalized Liouville distributions. In contrast to prior efforts, the shadow Dirichlet manipulates the support to achieve various kinds of dependence that arise frequently in machine learning problems.

3 Shadow Dirichlet Distribution

We introduce a new distribution that we call the shadow Dirichlet distribution. Let $S$ be the probability $(d-1)$-simplex, and let $\tilde\theta \in S$ be a random pmf drawn from a Dirichlet distribution with density $p_D$ and unnormalized parameter $\alpha \in \mathbb{R}_+^d$. Then we say the random pmf $\theta \in S$ is distributed according to a shadow Dirichlet distribution if $\theta = M\tilde\theta$ for some fixed $d \times d$ left-stochastic (that is, each column of $M$ sums to 1), full-rank (and hence invertible) matrix $M$, and we call $\tilde\theta$ the generating Dirichlet of $\theta$, or $\theta$'s Dirichlet shadow.
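To make the contrast with the renormalized Dirichlet concrete, the following sketch (Python/NumPy; function names are ours) implements both sampling schemes: the exact push-forward sampler for the shadow Dirichlet described in the next paragraph, and the rejection sampler for the renormalized Dirichlet whose inefficiency was noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shadow_dirichlet(alpha, M, n):
    # draw generating Dirichlet samples and multiply by M (exact, no rejection)
    theta_tilde = rng.dirichlet(alpha, size=n)   # n x d, rows on the simplex
    return theta_tilde @ M.T                     # theta = M theta_tilde

def sample_renormalized_dirichlet(alpha, in_support, n):
    # rejection sampling: acceptance can be arbitrarily slow when the
    # restricted support has small probability under Dirichlet(alpha)
    samples = []
    while len(samples) < n:
        theta = rng.dirichlet(alpha)
        if in_support(theta):                    # indicator of S-tilde
            samples.append(theta)
    return np.array(samples)
```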
Because $M$ is a left-stochastic linear map between finite-dimensional spaces, it is a continuous map from the convex and compact $S$ to a convex and compact subset of $S$ that we denote $S_M$. The shadow Dirichlet has two parameters: the generating Dirichlet's parameter $\alpha \in \mathbb{R}_+^d$, and the $d \times d$ matrix $M$. Both $\alpha$ and $M$ can be estimated from data. However, as we show in the following subsections, the matrix $M$ can be profitably used as a design parameter that is chosen based on application-specific knowledge or side-information to specify the restricted domain $S_M$, and in that way impose dependency between the components of the random pmfs. The shadow Dirichlet density $p(\theta)$ is the normalized pushforward of the Dirichlet density, that is, it is the composition of the Dirichlet density and $M^{-1}$ with the Jacobian:
$p(\theta) = \frac{1}{B(\alpha)\,|\det(M)|}\,\prod_j (M^{-1}\theta)_j^{\alpha_j - 1},$  (1)
where $B(\alpha) \triangleq \frac{\prod_j \Gamma(\alpha_j)}{\Gamma(\alpha_0)}$ is the standard Dirichlet normalizer, and $\alpha_0 \triangleq \sum_{j=1}^d \alpha_j$ is the standard Dirichlet precision factor. Table 1 summarizes the basic properties of the shadow Dirichlet. Fig. 1 shows an example shadow Dirichlet distribution. Generating samples from the shadow Dirichlet is trivial: generate samples from its generating Dirichlet (for example, using stick-breaking or urn-drawing) and multiply each sample by $M$ to create the corresponding shadow Dirichlet sample.

Table 1: Comparison and summary of the Dirichlet, renormalized Dirichlet, and shadow Dirichlet distributions.

                       Dirichlet($\alpha$)                                       Shadow Dirichlet ($\alpha$, $M$)                                              Renormalized Dirichlet ($\alpha$, $\tilde S$)
Density $p(\theta)$    $\frac{1}{B(\alpha)}\prod_{j=1}^d \theta_j^{\alpha_j-1}$  $\frac{1}{B(\alpha)|\det(M)|}\prod_{j=1}^d (M^{-1}\theta)_j^{\alpha_j-1}$     $\prod_{j=1}^d \theta_j^{\alpha_j-1} \big/ \int_{\tilde S}\prod_{j=1}^d q_j^{\alpha_j-1}\,dq$
Mean                   $\alpha/\alpha_0$                                          $M\alpha/\alpha_0$                                                            $\int_{\tilde S} \theta\,p(\theta)\,d\theta$
Covariance             $\mathrm{Cov}(\tilde\theta)$                               $M\,\mathrm{Cov}(\tilde\theta)\,M^\top$                                        $\int_{\tilde S}(\theta-\bar\theta)(\theta-\bar\theta)^\top p(\theta)\,d\theta$
Mode (if $\alpha > 1$) $\frac{\alpha_j - 1}{\alpha_0 - d}$                        $M\,\frac{\alpha_j - 1}{\alpha_0 - d}$                                         $\arg\max_{\theta \in \tilde S} p(\theta)$
How to Sample          stick-breaking, urn-drawing                                draw from Dirichlet($\alpha$), multiply by $M$                                 draw from Dirichlet($\alpha$), reject if not in $\tilde S$
ML Estimate            iterative (simple functions)                               iterative (simple functions)                                                   unknown complexity
ML Compound Estimate   iterative (simple functions)                               iterative (numerical integration)                                              unknown complexity

3.1 Example: Regularized Pmfs

The shadow Dirichlet can be designed to specify a distribution over a set of regularized pmfs $S_M = \{\theta = \lambda\tilde\theta + (1-\lambda)\bar\theta : \tilde\theta \in S\}$, for specific values of $\lambda$ and $\bar\theta \in S$. In general, for a given $\lambda$ and $\bar\theta$, the following $d \times d$ matrix $M$ will change the support to the desired subset $S_M$ by mapping the extreme points of $S$ to the extreme points of $S_M$:
$M = (1-\lambda)\,\bar\theta\,\mathbf{1}^\top + \lambda I,$  (2)
where $I$ is the $d \times d$ identity matrix. In Section 4 we show that the $M$ given in (2) is optimal in a maximum entropy sense.

3.2 Example: Bounded Variation Pmfs

We describe how to use the shadow Dirichlet to model a random pmf that has bounded variation such that $|\theta_k - \theta_l| \leq \epsilon_{k,l}$ for any $k, \ell \in \{1, 2, \dots, d\}$ and $\epsilon_{k,l} > 0$. To construct specified bounds on the variation, we first analyze the variation for a given $M$. For any $d \times d$ left-stochastic matrix $M$, $\theta = M\tilde\theta = \big[\sum_{j=1}^d M_{1j}\tilde\theta_j\ \cdots\ \sum_{j=1}^d M_{dj}\tilde\theta_j\big]^\top$, so the difference between any two entries is
$|\theta_k - \theta_l| = \Big|\sum_j (M_{kj} - M_{lj})\tilde\theta_j\Big| \leq \sum_j |M_{kj} - M_{lj}|\,\tilde\theta_j.$  (3)
Thus, to obtain a distribution over pmfs with bounded $|\theta_k - \theta_\ell| \leq \epsilon_{k,l}$ for any $k, \ell$ components, it is sufficient to choose components of the matrix $M$ such that $|M_{kj} - M_{lj}| \leq \epsilon_{k,l}$ for all $j = 1, \dots, d$, because $\tilde\theta$ in (3) sums to 1. One way to create such an $M$ is using the regularization strategy described in Section 3.1. For this case, the $j$th component of $\theta$ is $\theta_j = (M\tilde\theta)_j = \lambda\tilde\theta_j + (1-\lambda)\bar\theta_j$,
and thus the variation between the $i$th and $j$th components of any pmf in $S_M$ is:
$|\theta_i - \theta_j| = \big|\lambda\tilde\theta_i + (1-\lambda)\bar\theta_i - \lambda\tilde\theta_j - (1-\lambda)\bar\theta_j\big| \leq \lambda\,|\tilde\theta_i - \tilde\theta_j| + (1-\lambda)\,|\bar\theta_i - \bar\theta_j| \leq \lambda + (1-\lambda)\max_{i,j}|\bar\theta_i - \bar\theta_j|.$  (4)
Thus by choosing an appropriate $\lambda$ and regularizing pmf $\bar\theta$, one can impose the bounded variation given by (4). For example, set $\bar\theta$ to be the uniform pmf, and choose any $\lambda \in (0,1)$; then the matrix $M$ given by (2) will guarantee that the difference between any two entries of any pmf drawn from the shadow Dirichlet $(M, \alpha)$ will be less than or equal to $\lambda$.

3.3 Example: Monotonic Pmfs

For pmfs over ordered components, it may be desirable to restrict the support of the random pmf distribution to only monotonically increasing pmfs (or to only monotonically decreasing pmfs). A $d \times d$ left-stochastic matrix $M$ that will result in a shadow Dirichlet that generates only monotonically increasing $d \times 1$ pmfs has $k$th column $[0\ \cdots\ 0\ \ 1/(d-k+1)\ \cdots\ 1/(d-k+1)]^\top$; we call this the monotonic $M$. It is easy to see that with this $M$ only monotonic $\theta$'s can be produced, because $\theta_1 = \frac{1}{d}\tilde\theta_1$, which is less than or equal to $\theta_2 = \frac{1}{d}\tilde\theta_1 + \frac{1}{d-1}\tilde\theta_2$, and so on. In Section 4 we show that the monotonic $M$ is optimal in a maximum entropy sense. Note that to provide support over both monotonically increasing and decreasing pmfs with one distribution is not achievable with a shadow Dirichlet, but could be achieved by a mixture of two shadow Dirichlets.

3.4 What Restricted Subsets are Possible?

Above we have described solutions to construct $M$ for three kinds of dependence that arise in machine learning applications. Here we consider the more general question: what subsets of the simplex can be the support of the shadow Dirichlet, and how does one design a shadow Dirichlet for a particular support? For any matrix $M$, by the Krein–Milman theorem [10], $S_M = MS$ is the convex hull of its extreme points. If $M$ is injective, the extreme points of $S_M$ are easy to specify, as a $d \times d$ matrix $M$ will have $d$ extreme points that occur for the $d$ choices of $\tilde\theta$ that have only one nonzero component, as the rest of the $\tilde\theta$ will create a non-trivial convex combination of the columns of $M$, and therefore cannot result in extreme points of $S_M$ by definition. That is, the extreme points of $S_M$ are the $d$ columns of $M$, and one can design any $S_M$ with $d$ extreme points by setting the columns of $M$ to be those extreme pmfs. However, if one wants the new support to be a polytope in the probability $(d-1)$-simplex with $m > d$ extreme points, then one must use a fat $M$ with $d \times m$ entries. Let $S^m$ denote the probability $(m-1)$-simplex; then the domain of the shadow Dirichlet will be $MS^m$, which is the convex hull of the $m$ columns of $M$ and forms a convex polytope in $S$ with at most $m$ vertices. In this case $M$ cannot be injective, and hence it is not bijective between $S^m$ and $MS^m$. However, a density on $MS^m$ can be defined as:
$p(\theta) = \frac{1}{B(\alpha)} \int_{\{\tilde\theta :\ M\tilde\theta = \theta\}} \prod_j \tilde\theta_j^{\alpha_j - 1}\,d\tilde\theta.$  (5)
On the other hand, if one wants the support to be a low-dimensional polytope subset of a higher-dimensional probability simplex, then a thin $d \times m$ matrix $M$, where $m < d$, can be used to implement this. If $M$ is injective, then it has a left inverse $M^\dagger$ that is a matrix of dimension $m \times d$, and the normalized pushforward of the original density can be used as a density on the image $MS^m$:
$p(\theta) = \frac{1}{B(\alpha)\,|\det(M^\top M)|^{1/2}} \prod_j (M^\dagger\theta)_j^{\alpha_j - 1}.$
If $M$ is not injective, then one way to determine a density is to use (5).
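The two constructive choices of $M$ from Sections 3.1 and 3.3 are one-liners; a sketch follows (Python/NumPy; function names are ours, not from the paper's released code).

```python
import numpy as np

def regularized_M(theta_bar, lam):
    # eq. (2): M = (1 - lambda) * theta_bar * 1^T + lambda * I
    d = len(theta_bar)
    return (1.0 - lam) * np.outer(theta_bar, np.ones(d)) + lam * np.eye(d)

def monotonic_M(d):
    # Section 3.3: k-th column is [0, ..., 0, 1/(d-k+1), ..., 1/(d-k+1)]^T
    M = np.zeros((d, d))
    for k in range(1, d + 1):
        M[k - 1:, k - 1] = 1.0 / (d - k + 1)
    return M

# both matrices are left-stochastic: every column sums to one
assert np.allclose(regularized_M(np.full(3, 1 / 3), 0.7).sum(axis=0), 1.0)
assert np.allclose(monotonic_M(3).sum(axis=0), 1.0)
```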
4 Information-theoretic Properties

In this section we note two information-theoretic properties of the shadow Dirichlet. Let $\theta$ be drawn from the shadow Dirichlet density $p_M$, and let its generating Dirichlet $\tilde\theta$ be drawn from $p_D$. Then the differential entropy of the shadow Dirichlet is $h(p_M) = \log|\det(M)| + h(p_D)$, where $h(p_D)$ is the differential entropy of its generating Dirichlet. In fact, the shadow Dirichlet always has less entropy than its Dirichlet shadow because $\log|\det(M)| \leq 0$, which can be shown as a corollary to the following lemma (proof not included due to lack of space):

Lemma 4.1. Let $\{x_1, \dots, x_n\}$ and $\{y_1, \dots, y_n\}$ be column vectors in $\mathbb{R}^n$. If each $y_j$ is a convex combination of the $x_i$'s, i.e. $y_j = \sum_{i=1}^n \beta_{ji} x_i$ with $\sum_{i=1}^n \beta_{ji} = 1$ and $\beta_{jk} \geq 0$ for all $j, k \in \{1, \dots, n\}$, then $|\det[y_1, \dots, y_n]| \leq |\det[x_1, \dots, x_n]|$.

It follows from Lemma 4.1 that the constructive solutions for $M$ given in (2) and the monotonic $M$ are optimal in the sense of maximizing entropy:

Corollary 4.1. Let $\mathcal{M}_{\mathrm{reg}}$ be the set of left-stochastic matrices $M$ that parameterize shadow Dirichlet distributions with support in $\{\theta = \lambda\tilde\theta + (1-\lambda)\bar\theta : \tilde\theta \in S\}$, for a specific choice of $\lambda$ and $\bar\theta$. Then the $M$ given in (2) results in the shadow Dirichlet with maximum entropy, that is, (2) solves $\arg\max_{M \in \mathcal{M}_{\mathrm{reg}}} h(p_M)$.

Corollary 4.2. Let $\mathcal{M}_{\mathrm{mono}}$ be the set of left-stochastic matrices $M$ that parameterize shadow Dirichlet distributions that generate only monotonic pmfs. Then the monotonic $M$ given in Section 3.3 results in the shadow Dirichlet with maximum entropy, that is, the monotonic $M$ solves $\arg\max_{M \in \mathcal{M}_{\mathrm{mono}}} h(p_M)$.
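A quick numerical sanity check of the entropy claim (a sketch under our own setup, not code from the paper): for any left-stochastic $M$, Lemma 4.1 with the $x_i$ taken as the standard basis vectors gives $|\det M| \leq 1$, hence $\log|\det M| \leq 0$ and $h(p_M) \leq h(p_D)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# random left-stochastic matrix: each column is a point in the simplex,
# i.e. a convex combination of the standard basis vectors
M = rng.dirichlet(np.ones(d), size=d).T
assert np.allclose(M.sum(axis=0), 1.0)
sign, logdet = np.linalg.slogdet(M)
print(logdet <= 0.0)   # True: the shadow Dirichlet loses entropy
```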
By definition, this estimate will be less likely than the maximum likelihood estimate, but may be a practical short-cut in some cases. For the compound shadow Dirichlet, we believe the method of moments estimator will be a poorer estimate in general. The problem is that if one draws samples from a pmf ? from a restricted subset SM of the simplex, then the normalized empirical histogram ?? of those samples may not be in SM . For example given a monotonic pmf, the histogram of five samples drawn from it may not be monotonic. Then the empirical mean of such normalized empirical histograms may not be in SM , and so setting the shadow Dirichlet mean M ? equal to the empirical mean may lead to an infeasible estimate (one that is outside SM ). A heuristic solution is to project the empirical mean into SM first, for example, by finding the nearest pmf in SM in squared error or relative entropy. As with the compound Dirichlet, this may still be a useful approach in practice for some problems. Next we state an EM method to find the maximum likelihood estimate ? ? . Let s be a d ? N matrix of sample histograms from different experiments, such that the ith column si is the ith histogram for i = 1, . . . , N , and (si )j is the number of times we have observed the jth event from the ith pmf vi . Then the maximum log-likelihood estimate of ? solves arg max log p(s|?) for ? ? Rk+ . If the random pmfs are drawn from a Dirichlet distribution, then finding this maximum likelihood estimate requires an iterative procedure, and can be done in several ways including a gradient descent (ascent) approach. However, if the random pmfs are drawn from a shadow Dirichlet distribution, then a direct gradient descent approach is highly inconvenient as it requires taking derivatives of numerical integrals. However, it is practical to apply the expectation-maximization (EM) algorithm [13][14], as we describe in the rest of this section. Code to perform the EM estimation of ? can be downloaded from idl.ee.washington.edu/publications.php. Q We assume that the experiments are independent andP therefore p(s|?) = p({si }|?) = i p(si |?) and hence arg max??Rk+ log p(s|?) = arg max??Rk+ i log p(si |?). To apply the EM method, we consider the complete data to be the sample histograms s and the pmfs that generated them (s, v1 , v2 , . . . , vN ), whose expected log-likelihood will be maximized. Specifically, because of the assumed independence of the {vi }, the EM method requires one to repeatedly maximize the Q-function such that the estimate of ? at the (m + 1)th iteration is: ?(m+1) = arg max ??Rk + N X Evi |si ,?(m) [log p(vi |?)] . i=1 6 (7) Like the compound Dirichlet likelihood, the compound shadow Dirichlet likelihood is not necessarily concave. However, note that the  Q-function given in (7) is concave, because log p(vi |?) = ? log |det(M )| + log pD,? M ?1 vi , where pD,? is the Dirichlet distribution with parameter ?, and by a theorem of Ronning [11], log pD,? is a concave function, and adding a constant does not change the concavity. The Q-function is a finite integration of such concave functions and hence also concave [15]. We simplify (7) without destroying the concavity to yield the equivalent problem ?(m+1) = Pd Pd arg max g(?) for ? ? Rk+ , where g(?) = log ?(?0 ) ? 
j=1 log ?(?j ) + j=1 ?j ?j , and PN t ?j = N1 i=1 ziji , where tij and zi are integrals we compute with Monte Carlo integration: Z tij = log(M ?1 vi )j ?i SM ?i SM (s ) (vi )k i k pM (vi |?(m) )dvi k=1 Z zi = d Y d Y (vi )j k(si )k pM (vi |?(m) )dvi , k=1 where ?i is the normalization constant for the multinomial with histogram si . We apply the Newton method [16] to maximize g(?), where the gradient ?g(?) has kth component ?0 (?0 ) ? ?0 (?1 ) + ?1 , where ?0 denotes the digamma function. Let ?1 denote the trigamma function, then the Hessian matrix of g(?) is: H = ?1 (?0 )11T ? diag (?1 (?1 ), . . . , ?1 (?d )) . Note that because H has a very simple structure, the inversion of H required by the Newton step is greatly simplified by using the Woodbury identity [17]: H ?1 = ? diag(?1 , . . . , ?d ) ? 1 1 P1 [?i ?j ]d?d , where ?0 = ?1 (? and ?j = ?1 (? , j = 1, . . . , d. ? ? d ? 0) j) 0 5.3 j=1 j Estimating M for the Shadow Dirichlet Thus far we have discussed how to construct M to achieve certain desired properties and how to interpret a given M ?s effect on the support. In some cases it may be useful to estimate M directly from data, for example, finding the maximum likelihood M . In general, this is a non-convex problem because the set of rank d ? 1 matrices is not convex. However, we offer two approximations. First, note that as in estimating the support of a uniform distribution, the maximum likelihood M will correspond to a support that is no larger than needed to contain the convex hull of sample pmfs. Second, the mean of the empirical pmfs will be in the support, and thus a heuristic is to set the kth column of M (which corresponds to the kth vertex of the support) to be a convex combination of the kth vertex of the standard probability simplex and the empirical mean pmf. We provide code that finds the d optimal such convex combinations such that a specificed percentage of the sample pmfs are within the support, which reduces the non-convex problem of finding the maximum likelihood d ? d matrix M to a d-dimensional convex relaxation. 6 Demonstrations It is reasonable to believe that if the shadow Dirichlet better matches the problem?s statistics, it will perform better in practice, but an open question is how much better? To motivate the reader to investigate this question further in applications, we provide two small demonstrations. 6.1 Verifying the EM Estimation We used a broad suite of simulations to test and verify the EM estimation. Here we include a simple visual confirmation that the EM estimation works: we drew 100 i.i.d. pmfs from a shadow Dirichlet with monotonic M for d = 3 and ? = [3.94 2.25 2.81] (used in [18]). From each of the 100 pmfs, we drew 100 i.i.d. samples. Then we applied the EM algorithm to find the ? for both the standard compound Dirichlet, and the compound shadow Dirichlet with the correct M . Fig. 2 shows the true distribution and the two estimated distributions. 7 True Distribution (Shadow Dirichlet) Estimated Shadow Dirichlet Estimated Dirichlet Figure 2: Samples were drawn from the true distribution and the given EM method was applied to form the estimated distributions. 6.2 Estimating Proportions from Sales Manufacturers often have constrained manufacturing resources, such as equipment, inventory of raw materials, and employee time, with which to produce multiple products. The manufacturer must decide how to proportionally allocate such constrained resources across their product line based on their estimate of proportional sales. 
5.3 Estimating $M$ for the Shadow Dirichlet

Thus far we have discussed how to construct $M$ to achieve certain desired properties and how to interpret a given $M$'s effect on the support. In some cases it may be useful to estimate $M$ directly from data, for example, finding the maximum likelihood $M$. In general, this is a non-convex problem because the set of rank $d-1$ matrices is not convex. However, we offer two approximations. First, note that, as in estimating the support of a uniform distribution, the maximum likelihood $M$ will correspond to a support that is no larger than needed to contain the convex hull of sample pmfs. Second, the mean of the empirical pmfs will be in the support, and thus a heuristic is to set the $k$th column of $M$ (which corresponds to the $k$th vertex of the support) to be a convex combination of the $k$th vertex of the standard probability simplex and the empirical mean pmf. We provide code that finds the $d$ optimal such convex combinations such that a specified percentage of the sample pmfs are within the support, which reduces the non-convex problem of finding the maximum likelihood $d \times d$ matrix $M$ to a $d$-dimensional convex relaxation.

6 Demonstrations

It is reasonable to believe that if the shadow Dirichlet better matches the problem's statistics, it will perform better in practice, but an open question is how much better. To motivate the reader to investigate this question further in applications, we provide two small demonstrations.

6.1 Verifying the EM Estimation

We used a broad suite of simulations to test and verify the EM estimation. Here we include a simple visual confirmation that the EM estimation works: we drew 100 i.i.d. pmfs from a shadow Dirichlet with monotonic $M$ for $d = 3$ and $\alpha = [3.94\ 2.25\ 2.81]$ (used in [18]). From each of the 100 pmfs, we drew 100 i.i.d. samples. Then we applied the EM algorithm to find the $\alpha$ for both the standard compound Dirichlet and the compound shadow Dirichlet with the correct $M$. Fig. 2 shows the true distribution and the two estimated distributions.

Figure 2: Samples were drawn from the true distribution and the given EM method was applied to form the estimated distributions. (Panels: true distribution (shadow Dirichlet); estimated shadow Dirichlet; estimated Dirichlet.)

6.2 Estimating Proportions from Sales

Manufacturers often have constrained manufacturing resources, such as equipment, inventory of raw materials, and employee time, with which to produce multiple products. The manufacturer must decide how to proportionally allocate such constrained resources across their product line based on their estimate of proportional sales.

Manufacturer Artifact Puzzles gave us their past retail sales data for the 20 puzzles they sold during July 2009 through Dec 2009, which we used to predict the proportion of sales expected for each puzzle. These estimates were then tested on the next four months of sales data, for January 2010 through April 2010. The company also provided a similarity between puzzles $S$, where $S(A, B)$ is the proportion of times an order during the six training months included both puzzle A and B if it included puzzle A. We compared treating each of the six training months of sales data as a sample from a compound Dirichlet versus a compound shadow Dirichlet. For the shadow Dirichlet, we normalized each column of the similarity matrix $S$ to sum to one so that it was left-stochastic, and used that as the $M$ matrix; this forces puzzles that are often bought together to have closer estimated proportions. We estimated each $\alpha$ parameter by EM to maximize the likelihood of the past sales data, and then estimated the future sales proportions to be the mean of the estimated Dirichlet or shadow Dirichlet distribution. We also compared with treating all six months of sales data as coming from one multinomial, which we estimated as the maximum likelihood multinomial, and with taking the mean of the six empirical pmfs.

Table 2: Squared errors between estimates and actual proportional sales.

        Multinomial   Mean Pmf   Dirichlet   Shadow Dirichlet
Jan.      .0129        .0106       .0109         .0093
Feb.      .0185        .0206       .0172         .0164
Mar.      .0231        .0222       .0227         .0197
Apr.      .0240        .0260       .0235         .0222

7 Summary

In this paper we have proposed a variant of the Dirichlet distribution that naturally captures some of the dependent structure that arises often in machine learning applications. We have discussed some of its theoretical properties, and shown how to specify the distribution for regularized pmfs, bounded variation pmfs, monotonic pmfs, and for any desired convex polytopal domain. We have derived the EM method and made available code to estimate both the shadow Dirichlet and compound shadow Dirichlet from data. Experimental results demonstrate that the EM method can estimate the shadow Dirichlet effectively, and that the shadow Dirichlet may provide worthwhile advantages in practice.

References

[1] B. Frigyik, A. Kapila, and M. R. Gupta, "Introduction to the Dirichlet distribution and related processes," Tech. Rep., University of Washington, 2010.
[2] C. Zhai and J. Lafferty, "A study of smoothing methods for language models applied to information retrieval," ACM Trans. on Information Systems, vol. 22, no. 2, pp. 179–214, 2004.
[3] Y. Chen, E. K. Garcia, M. R. Gupta, A. Rahimi, and L. Cazzanti, "Similarity-based classification: Concepts and algorithms," Journal of Machine Learning Research, vol. 10, pp. 747–776, March 2009.
[4] R. Nallapati, T. Minka, and S. Robertson, "The smoothed-Dirichlet distribution: a building block for generative topic models," Tech. Rep., Microsoft Research, Cambridge, 2007.
[5] J. Aitchison, Statistical Analysis of Compositional Data, Chapman Hall, New York, 1986.
[6] R. J. Connor and J. E. Mosiman, "Concepts of independence for proportions with a generalization of the Dirichlet distribution," Journal of the American Statistical Association, vol. 64, pp. 194–206, 1969.
[7] K. Pearson, "Mathematical contributions to the theory of evolution – on a form of spurious correlation which may arise when indices are used in the measurement of organs," Proc. Royal Society of London, vol. 60, pp. 489–498, 1897.
[8] A. Ongaro, S. Migliorati, and G. S. Monti, "A new distribution on the simplex containing the Dirichlet family," Proc. 3rd Compositional Data Analysis Workshop, 2008.
[9] W. S. Rayens and C. Srinivasan, "Dependence properties of generalized Liouville distributions on the simplex," Journal of the American Statistical Association, vol. 89, no. 428, pp. 1465–1470, 1994.
[10] W. Rudin, Functional Analysis, McGraw-Hill, New York, 1991.
[11] G. Ronning, "Maximum likelihood estimation of Dirichlet distributions," Journal of Statistical Computation and Simulation, vol. 34, no. 4, pp. 215–221, 1989.
[12] T. Minka, "Estimating a Dirichlet distribution," Tech. Rep., Microsoft Research, Cambridge, 2009.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society: Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977.
[14] M. R. Gupta and Y. Chen, Theory and Use of the EM Method, Foundations and Trends in Signal Processing, Hanover, MA, 2010.
[15] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[16] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.
[17] K. B. Petersen and M. S. Pedersen, Matrix Cookbook, 2009. Available at matrixcookbook.com.
[18] R. E. Madsen, D. Kauchak, and C. Elkan, "Modeling word burstiness using the Dirichlet distribution," in Proc. Intl. Conf. Machine Learning, 2005.
Large Margin Multi-Task Metric Learning

Kilian Q. Weinberger
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
[email protected]

Shibin Parameswaran
Department of Electrical and Computer Engineering
University of California, San Diego
La Jolla, CA 92093
[email protected]

Abstract

Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by Evgeniou et al. [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.

1 Introduction

Multi-task learning (MTL) [6, 8, 19] refers to the joint training of multiple problems, enforcing a common intermediate parameterization or representation. If the different problems are sufficiently related, MTL can lead to better generalization and benefit all of the tasks. This phenomenon has been examined further by recent papers which have started to build a theoretical foundation that underpins these initial empirical findings [1, 2, 3].

A well-known application of MTL occurs within the realm of speech recognition. The way different people pronounce the same words differs greatly based on their gender, accent, nationality or other individual characteristics. One can view each possible speaker, or clusters of speakers, as different learning problems that are highly related. Ideally, a speech recognition system should be trained only on data from the user it is intended for. However, annotated data is expensive and difficult to obtain. Therefore, it is highly beneficial to leverage the similarities of data sets from different types of speakers while adapting to the specifics of each particular user [13, 16].

One particularly successful instance of multi-task learning is its adaptation to support vector machines (svm) [14, 15]. Support vector machines are arguably amongst the most successful classification algorithms of all times; however, their multi-class extensions such as one-vs-all [4] or clever refinements of the loss functions [10, 21] all require at least one weight vector per class label. As a consequence, the MTL adaptation of svm [15] requires all tasks to share an identical set of labels (or requires side-information about task dependencies) for meaningful transfer of knowledge. This is a serious limitation in many domains (binary or non-binary) where different tasks might not share the same classes (e.g. identifying multiple diseases from particular patient data).
Recently, Weinberger et al. introduced Large Margin Nearest Neighbor (lmnn) [20], an algorithm that translates the maximum margin learning principle behind svms to k-nearest neighbor classification (kNN) [9]. Similar to svms, the solution of lmnn is also obtained through a convex optimization problem that maximizes a large margin between input vectors from different classes. However, instead of positioning a separating hyperplane, lmnn learns a Mahalanobis metric. Weinberger et al. show that the lmnn metric improves the kNN classification accuracy to be on par with kernelized svms [20]. One advantage that the kNN decision rule has over hyperplane classifiers is its agnosticism towards the number of class labels of a particular data set. A new test point is classified by the majority label of its $k$ closest neighbors within a known training data set; additional classes require no special treatment.

We follow the intuition of Evgeniou et al. [15] and extend lmnn to the multi-task setting. Our algorithm learns one metric that is shared amongst all the tasks and one specific metric unique to each task. We show that the combination is still a well-defined pseudo-metric that can be learned in a single convex optimization problem. We demonstrate on several multi-task settings that these shared metrics significantly reduce the overall classification error. Further, our algorithm tends to outperform multi-task neural networks [6] and svm [15] on tasks with many class labels. To our knowledge, this paper introduces the first multi-task metric learning algorithm for the kNN rule that explicitly models the commonalities and specifics of different tasks.

2 Large Margin Nearest Neighbor

This section describes the large margin nearest neighbor algorithm as introduced in [20]. For now, we will focus on a single-task learning framework, with a training set consisting of $n$ examples of dimensionality $d$, $\{(x_i, y_i)\}_{i=1}^n$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{1, 2, \dots, c\}$. Here, $c$ denotes the number of classes. The Mahalanobis distance between two inputs $x_i$ and $x_j$ is defined as
$d_M(x_i, x_j) = \sqrt{(x_i - x_j)^\top M (x_i - x_j)},$  (1)
where $M$ is a symmetric positive semi-definite matrix ($M \succeq 0$). The definition in eq. (1) reduces to the Euclidean metric if we set $M$ to the identity matrix, i.e. $M = I$. The lmnn algorithm learns the matrix $M$ for the Mahalanobis metric¹ in eq. (1) explicitly to enhance k-nearest neighbor classification.

Figure 1: An illustration of a data set before and after lmnn, under the Euclidean metric and the learned Mahalanobis metric. The circles represent points of equal distance to the vector $x_i$; similarly labeled points are target neighbors, differently labeled points are impostors. The Mahalanobis metric rescales directions to push impostors further away than target neighbors by a large margin.

Lmnn mimics the non-continuous and non-differentiable leave-one-out classification error of kNN with a convex loss function. The loss function encourages the local neighborhood around every input to stay "pure". Inputs with different labels are pushed away and inputs with a similar label are pulled closer. One of the advantages of lmnn over related work [12, 17] is that the (global) metric is optimized locally, which allows it to work with multi-modal data distributions and encourages better generalization. To achieve this, the algorithm requires $k$ target neighbors to be identified for every input prior to learning, which should become the $k$ nearest neighbors after the optimization. Usually, these are picked with the help of side-information, or in the absence thereof, as the $k$ nearest neighbors within the same class based on the Euclidean metric. We use the notation $j \rightsquigarrow i$ to indicate that $x_j$ is a target neighbor of $x_i$.

Lmnn learns a Mahalanobis metric that keeps each input $x_i$ closer to its target neighbors than other inputs with different class labels (impostors), by a large margin. For an input $x_i$, target neighbor $x_j$, and impostor $x_k$, this relation can be expressed as a linear inequality constraint with respect to the squared distance $d_M^2(\cdot,\cdot)$:
$d_M^2(x_i, x_k) - d_M^2(x_i, x_j) \geq 1.$  (2)
Eq. (2) is enforced only for the local target neighbors. See Fig. 1 for an illustration. Here, all points on the circles have equal distance from $x_i$. Under the Mahalanobis metric this circle is deformed to an ellipsoid, which causes the impostors (marked as squares) to be further away than the target neighbors.

¹For simplicity we will refer to pseudo-metrics also as metrics, as the distinction has no implications for our algorithm.
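A sketch (Python/NumPy; the function name is ours) of evaluating the margin constraint (2) as a hinge penalty for a list of triplets:

```python
import numpy as np

def triplet_hinge_losses(X, M, triplets):
    """[1 + d_M^2(x_i, x_j) - d_M^2(x_i, x_k)]_+ for each triplet (i, j, k),
    where j is a target neighbor of i and k an impostor; a zero value
    means constraint (2) is satisfied with the unit margin."""
    def d2(a, b):
        diff = X[a] - X[b]
        return diff @ M @ diff
    return np.array([max(0.0, 1.0 + d2(i, j) - d2(i, k))
                     for (i, j, k) in triplets])
```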
Usually, these are picked with the help of side-information or, in the absence thereof, as the k nearest neighbors within the same class based on the Euclidean metric. We use the notation j ⇝ i to indicate that x_j is a target neighbor of x_i. Lmnn learns a Mahalanobis metric that keeps each input x_i closer to its target neighbors than to other inputs with different class labels (impostors), by a large margin. For an input x_i, target neighbor x_j, and impostor x_k, this relation can be expressed as a linear inequality constraint with respect to the squared distance d²_M(·,·):

    d²_M(x_i, x_k) − d²_M(x_i, x_j) ≥ 1.    (2)

Eq. (2) is enforced only for the local target neighbors. See Fig. 1 for an illustration. Here, all points on the circles have equal distance from x_i. Under the Mahalanobis metric this circle is deformed to an ellipsoid, which causes the impostors (marked as squares) to be further away than the target neighbors.

The semidefinite program (SDP) introduced by [20] moves target neighbors close by minimizing Σ_{j⇝i} d²_M(x_i, x_j) while penalizing violations of the constraint in eq. (2). The latter is achieved through additive slack variables ξ_ijk ≥ 0. If we define a set of triples S = {(i, j, k) : j ⇝ i, y_k ≠ y_i}, the problem can be stated as the SDP shown in Table 1.

    min_M  Σ_{j⇝i} d²_M(x_i, x_j) + μ Σ_{(i,j,k)∈S} ξ_ijk
    subject to: ∀(i, j, k) ∈ S:
      (1) d²_M(x_i, x_k) − d²_M(x_i, x_j) ≥ 1 − ξ_ijk
      (2) ξ_ijk ≥ 0
      (3) M ⪰ 0.
    Table 1: Convex optimization problem of lmnn.

This optimization problem has O(kn²) constraints of type (1) and (2), along with the positive semidefinite constraint on the d × d matrix M. Hence, standard off-the-shelf packages are not particularly suited to solve this SDP. For this paper we use the special-purpose subgradient descent solver developed in [20], which can handle data sets on the order of tens of thousands of samples. As the optimization problem is not sensitive to the exact choice of the trade-off constant μ [20], we set μ = 1 throughout this paper.

¹For simplicity we will refer to pseudo-metrics also as metrics, as the distinction has no implications for our algorithm.

3 Multi-Task Learning

In this section, we briefly review the approach presented by Evgeniou et al. [15] that extends svm to multi-task learning (mt-svm). We assume that we are given T different but related tasks. Each input (x_i, y_i) belongs to exactly one of the tasks 1, ..., T, and we let I_t be the set of indices such that i ∈ I_t if and only if the input-label pair (x_i, y_i) belongs to task t. For simplification, throughout this section we will assume a binary classification scenario, in particular y_i ∈ {+1, −1}.

Following the original description of [15], mt-svm learns T classifiers w_1, ..., w_T, where each classifier w_t is specifically dedicated to task t. In addition, the authors introduce a global classifier w_0 that captures the commonality among all the tasks. An example x_i with i ∈ I_t is classified by the rule ŷ_i = sign(x_i^T (w_0 + w_t)). The joint optimization problem is to minimize the following cost:

    min_{w_0,...,w_T}  Σ_{t=0}^T λ_t ‖w_t‖²_2 + Σ_{t=1}^T Σ_{i∈I_t} [1 − y_i (w_0 + w_t)^T x_i]_+    (3)

where [a]_+ = max(0, a). The constants λ_t ≥ 0 trade off the regularization of the various tasks. Note that the relative value between λ_0 and the other λ_{t>0} controls the strength of the connection across tasks. In the extreme case, if λ_0 → +∞, then w_0 = 0 and all tasks are decoupled; on the other hand, when λ_0 is small and λ_{t>0} → +∞, we obtain w_{t>0} = 0 and all the tasks share the same decision function with weights w_0.
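To make the coupling in eq. (3) concrete, here is a minimal numpy sketch of the mt-svm objective and prediction rule (our own illustration of [15], with hypothetical names, not the authors' code):

    import numpy as np

    def mt_svm_objective(w, X, y, task, lam):
        # Eq. (3). w is (T+1) x d with w[0] = w_0 (shared); task[i] in {1..T};
        # lam holds the trade-off constants lambda_0, ..., lambda_T.
        T = w.shape[0] - 1
        reg = sum(lam[t] * w[t] @ w[t] for t in range(T + 1))
        scores = np.einsum('ij,ij->i', X, w[0] + w[task])   # x_i^T (w_0 + w_{t_i})
        return reg + np.maximum(0.0, 1.0 - y * scores).sum()

    def mt_svm_predict(x, w, t):
        # Classification rule: y_hat = sign(x^T (w_0 + w_t))
        return np.sign(x @ (w[0] + w[t]))

Setting lam[0] very large drives w_0 toward 0 and decouples the tasks, while very large lam[1:] forces all tasks onto a single decision function, matching the two extremes discussed above.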
Although the mt-svm formulation is very elegant, it requires all tasks to share the same class labels. In the remainder of this paper we will introduce an MTL algorithm based on the kNN rule, which does not model each class with its own parameter vector.

4 Multi-Task Large Margin Nearest Neighbor

In this section we combine large margin nearest neighbor classification from section 2 with the multi-task learning paradigm from section 3. We follow the MTL setting with T learning tasks. Our goal is to learn a metric d_t(·,·) for each of the T tasks that minimizes the kNN leave-one-out classification error. Inspired by the methodology of the previous section, we model the commonalities between the various tasks through a shared Mahalanobis metric with M_0 ⪰ 0 and the task-specific idiosyncrasies with additional matrices M_1, ..., M_T ⪰ 0. We define the distance for task t as

    d_t(x_i, x_j) = sqrt( (x_i − x_j)^T (M_0 + M_t)(x_i − x_j) ).    (4)

Intuitively, the metric defined by M_0 picks up general trends across multiple data sets and the M_{t>0} specialize the metric further for each particular task. See Fig. 2 for an illustration. If we constrain the matrices M_t to be positive semi-definite (i.e. M_t ⪰ 0), then eq. (4) results in a well-defined pseudo-metric, as we show in section 4.1.

[Figure 2: An illustration of mt-lmnn. The matrix M_0 captures the commonality between the several tasks, whereas M_t for t > 0 adds the task-specific distance transformation.]

An important aspect of multi-task learning is the appropriate coupling of the multiple learning tasks. We have to ensure that the learning algorithm does not put too much emphasis onto the shared parameters M_0 or the individual parameters M_1, ..., M_T. To ensure this balance, we use the regularization term stated below:

    min_{M_0,...,M_T}  γ_0 ‖M_0 − I‖²_F + Σ_{t=1}^T γ_t ‖M_t‖²_F.    (5)

The trade-off parameter γ_t controls the regularization of M_t for all t = 0, 1, ..., T. If γ_0 → ∞, the shared metric M_0 reduces to the plain Euclidean metric, and if γ_{t>0} → ∞, the task-specific metrics M_{t>0} become irrelevant zero matrices. Therefore, if γ_{t>0} → ∞ and γ_0 is small, we learn a single metric M_0 across all tasks. In this case we want the result to be equivalent to applying lmnn on the union of all data sets. In the other extreme case, when γ_0 → ∞ and γ_{t>0} = 0, we want our formulation to reduce to T independent lmnn algorithms.

Similar to the set of triples S defined in section 2, let S_t be the set of triples restricted to vectors from task t, i.e., S_t = {(i, j, k) ∈ I_t³ : j ⇝ i, y_k ≠ y_i}. We can combine the regularizer in eq. (5) with the objective of lmnn applied to each of the T tasks. To ensure well-defined metrics, we add constraints that each matrix is positive semi-definite, i.e. M_t ⪰ 0 (see section 4.1 for more details). We refer to the resulting algorithm as multi-task large margin nearest neighbor (mt-lmnn). The optimization problem is shown in Table 2 and can be solved efficiently after some modifications to the special-purpose solver presented by Weinberger et al. [20].
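Before turning to the theoretical properties, a small sketch of the task distance in eq. (4) (illustrative only; M is a list with M[0] = M_0 shared and M[t] the task-specific additions):

    import numpy as np

    def mt_distance(xi, xj, M, t):
        # Eq. (4): distance for task t under the combined metric M_0 + M_t
        d = xi - xj
        return np.sqrt(d @ (M[0] + M[t]) @ d)

    rng = np.random.default_rng(1)
    M = [np.eye(3) for _ in range(3)]    # T = 2 tasks plus the shared M_0
    print(mt_distance(rng.normal(size=3), rng.normal(size=3), M, t=1))

With M_t = 0 every task falls back to the shared metric M_0; with M_0 = 0 the tasks decouple completely.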
4.1 Theoretical Properties

In this section we verify that the resulting distances are guaranteed to be well-defined pseudo-metrics and that the optimization is convex.

Theorem 1 If M_t ⪰ 0 for all t = 0, ..., T, then the distance functions d_t(·,·), as defined in eq. (4), are well-defined pseudo-metrics for all 0 ≤ t ≤ T.

The proof of Theorem 1 is completed in two steps: First, as the cone of positive semi-definite matrices is convex, any linear combination of positive semidefinite matrices is also positive semidefinite. This implies that d_t(·,·) is non-negative, and it is also trivially symmetric. The second part of the proof utilizes the fact that any positive semidefinite matrix M can be decomposed as M = L^T L, for some matrix L ∈ R^{d×d}. It therefore follows that there exists some matrix L_t such that L_t^T L_t = M_0 + M_t. Hence we can rephrase eq. (4) as

    d_t(x_i, x_j) = sqrt( (x_i − x_j)^T L_t^T L_t (x_i − x_j) ),    (6)

which is equivalent to the Euclidean distance after the transformation x_i → L_t x_i. It follows that eq. (6) preserves the triangle inequality. This completes the requirements for a pseudo-metric. If L_t is full rank, i.e. M_0 + M_t is strictly positive definite, then it also fulfills identity of indiscernibles, i.e., d(x_i, x_j) = 0 if and only if x_i = x_j, and d(·,·) is a metric.

    min_{M_0,...,M_T}  γ_0 ‖M_0 − I‖²_F + Σ_{t=1}^T [ γ_t ‖M_t‖²_F + Σ_{(i,j)∈I_t, j⇝i} d²_t(x_i, x_j) + Σ_{(i,j,k)∈S_t} ξ_ijk ]
    subject to: ∀t, ∀(i, j, k) ∈ S_t:
      (1) d²_t(x_i, x_k) − d²_t(x_i, x_j) ≥ 1 − ξ_ijk
      (2) ξ_ijk ≥ 0
      (3) M_0, M_1, ..., M_T ⪰ 0.
    Table 2: Convex optimization problem of mt-lmnn.

One of the advantages of lmnn over alternative distance metric learning algorithms, for example NCA [17], is that it can be stated as a convex optimization problem. This allows the global solution to be found efficiently with special-purpose solvers [20] or, for very large data sets, in an online relaxation [7]. It is therefore important to show that our new formulation preserves convexity.

Theorem 2 The mt-lmnn optimization problem in Table 2 is convex.

Constraints of type (2) and (3) are standard linear and positive-semidefinite constraints, which are known to be convex [5]. Convexity remains to be shown for constraints of type (1) and the objective. Both access the matrices M_t exclusively in terms of the squared distance d²(·,·). This can be expressed as

    d²(x_i, x_j) = trace(M_0 v_ij v_ij^T) + trace(M_t v_ij v_ij^T),    (7)

where v_ij = (x_i − x_j). Eq. (7) is linear in terms of the matrices M_t, and it follows that the constraints of type (1) are also linear and therefore trivially convex. Similarly, it follows that all terms in the objective are also linear, with the exception of the Frobenius norms in the regularization term. The latter term is quadratic (‖M_t‖²_F = trace(M_t^T M_t)) and therefore convex with respect to M_t. The regularization of M_0 can be expanded as trace(M_0^T M_0 − 2M_0 + I), which has one quadratic and one linear term. The sum of convex functions is convex [5], hence this concludes the proof.
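Both proofs lend themselves to a quick numerical check; this sketch (our own, not from the paper) verifies the factorization step of Theorem 1 and the linear trace form of eq. (7):

    import numpy as np

    rng = np.random.default_rng(2)
    A0, A1 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    M0, M1 = A0.T @ A0, A1.T @ A1              # PSD by construction
    L = np.linalg.cholesky(M0 + M1)            # M_0 + M_1 = L L^T

    xi, xj = rng.normal(size=4), rng.normal(size=4)
    v = xi - xj
    d2_metric = v @ (M0 + M1) @ v              # squared distance, eq. (4)
    d2_euclid = np.sum((L.T @ xi - L.T @ xj) ** 2)   # Euclidean after x -> L^T x
    d2_trace = (np.trace(M0 @ np.outer(v, v))
                + np.trace(M1 @ np.outer(v, v)))     # eq. (7)
    assert np.allclose(d2_metric, d2_euclid) and np.allclose(d2_metric, d2_trace)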
5 Results

We evaluate mt-lmnn on the Isolet spoken alphabet recognition² and CoIL 2000³ datasets. We first provide a brief overview of the two datasets and then present results in various multi-task and domain adaptation settings.

The Isolet dataset was collected from 150 speakers uttering all characters in the English alphabet twice, i.e., each speaker contributed 52 training examples (in total 7797 examples⁴). The task is to classify which letter has been uttered based on several acoustic features: spectral coefficients and contour, sonorant and post-sonorant features. The exact feature description can be found in [16]. The speakers are grouped into smaller sets of 30 similar speakers, giving rise to 5 disjoint subsets called isolet1-5. This representation of Isolet lends itself naturally to the multi-task learning regime. We treat each of the subsets as its own classification task (T = 5) with c = 26 labels. The five tasks differ because the groups of speakers vary greatly in the way they utter the characters of the English alphabet. They are also highly related to each other because all the data is collected from the same utterances (the English alphabet). To remove low-variance noise and to speed up computation time, we preprocess the Isolet data with PCA [18] and project it onto its leading principal components that capture 95% of the data variance, reducing the dimensionality from 617 to 169.

The CoIL dataset contains information about customers of an insurance company. The customer information consists of 86 variables, including product usage and socio-demographic data. The training set contains 5822 and the test set 4000 examples. Out of the 86 variables, we used 6 categorical features to create different classification problems, leaving the remaining 80 features as the joint data set. Our target variables consist of attributes 1, 4, 5, 6, 44 and 86, which indicate customer subtype, customer age bracket, customer occupation, a discretized percentage of Roman Catholics in the customer's area, contribution from a third-party insurance, and, finally, a binary value that signifies whether the customer has a caravan insurance policy. The tasks have a different number of output labels but they share the same input data.

²Available for download from the UCI Machine Learning Repository.
³Available for download at http://kdd.ics.uci.edu/databases/tic/tic.html
⁴Three examples are historically missing.

    Isolet     1        2        3        4        5        Avg
    Euc        13.30%   18.62%   21.44%   24.42%   18.91%   19.34%
    U-lmnn     6.05%    6.53%    8.59%    8.37%    7.30%    7.37%
    st-lmnn    5.32%    5.03%    10.09%   9.39%    7.69%    7.51%
    mt-lmnn    3.89%    3.17%    6.99%    6.31%    5.58%    5.19%
    st-net     4.74%    4.62%    6.73%    7.95%    5.74%    5.96%
    mt-net     4.52%    3.81%    6.92%    6.51%    5.61%    5.48%
    st-svm     8.75%    9.62%    13.81%   13.62%   13.71%   11.90%
    mt-svm     5.99%    5.99%    7.30%    8.39%    7.82%    7.10%
    Table 3: Error rates on label-compatible Isolet tasks when tested with task-specific train sets.

Each Isolet subset (task) was divided into randomly selected 60/20/20 splits of train/validation/test sets. We randomly picked 20% of the CoIL training examples and set them aside for validation purposes. The results were averaged over 10 runs in both cases. The validation subset was used for model selection for mt-lmnn, i.e. choosing the regularization constants γ_t and the number of iterations for early stopping. Although our model allows different weights γ_t for each task, throughout this paper we only differentiated between γ_0 and γ = γ_{t>0}. The neighborhood size k was fixed to k = 3, which is the setting recommended in the original lmnn publication [20]. For competing algorithms, we performed a thorough parameter sweep and reported the best test set results (thereby favoring them over our method).

These two datasets capture the essence of an ideal mt-lmnn application area. Our algorithm is very effective when the feature space is dense and when dealing with multi-label tasks with or without the same set of output labels. This is demonstrated in the first subsection of the results. The second subsection provides a brief demonstration of the use of mt-lmnn in the domain adaptation (or cold start) scenario.
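The PCA preprocessing described above (keeping enough leading components to explain 95% of the variance) takes only a few lines; the sklearn usage below is our own illustration on stand-in data, not the authors' script:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.default_rng(3).normal(size=(200, 617))  # stand-in for Isolet features
    pca = PCA(n_components=0.95)    # float in (0, 1): retain 95% of the variance
    X_reduced = pca.fit_transform(X)
    print(X.shape[1], '->', X_reduced.shape[1])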
5.1 Multi-Task Learning

We categorized the multi-task learning setting into two different scenarios: label-compatible MTL and label-incompatible MTL. In the label-compatible MTL scenario, all the tasks share the same label set. The label-incompatible scenario arises when applying MTL to a group of multi-class classification tasks that do not share the same set of labels. We demonstrate the applicability and effectiveness of mt-lmnn in both these scenarios in the following sub-sections.

Label-Compatible Multi-Task Learning

The experiments in this setting were conducted on the Isolet data, where isolet1-5 are the 5 tasks and all of them share the same 26 labels. We compared the performance of our mt-lmnn algorithm with different baselines in Table 3. The first 3 algorithms are kNN classifiers using different metrics: "Euc" represents the Euclidean metric; "U-lmnn" is the metric obtained from lmnn trained on the union of the training data of all tasks (essentially "pooling" all the data and ignoring the multi-task aspect); "st-lmnn" is single-task lmnn trained independently of the other tasks. As additional comparison we have also included results from linear single-task and multi-task support vector machines [15], denoted as "st-svm" and "mt-svm", and non-linear single-task and multi-task neural networks (48 hidden layers) [6], denoted as "st-net" and "mt-net" respectively.

    Isolet   Euc      U-lmnn   st-lmnn  mt-lmnn
    1        9.65%    4.71%    5.51%    4.13%
    2        14.01%   5.19%    5.29%    3.94%
    3        11.06%   5.32%    7.14%    3.85%
    4        12.28%   5.03%    7.89%    4.49%
    5        10.67%   4.17%    7.11%    3.65%
    Avg      11.53%   4.88%    6.59%    4.01%
    Table 4: Error rates when tested with the union of train sets from all the tasks.

A special case arises in terms of the kNN-based classifiers in the label-compatible scenario: during the actual classification step, regardless of what metric is used, the kNN training data set can either consist of only task-specific training data or the pooled data from all tasks. The kNN results obtained using pooled training sets at the classification phase are shown in Table 4. Both sets of results, in Tables 3 and 4, show that mt-lmnn obtains considerable improvement over its single-task counterparts on all 5 tasks and generally outperforms the other multi-task algorithms based on neural networks and support vector machines.

Label-Incompatible Multi-Task Learning

To demonstrate mt-lmnn's ability to learn multiple tasks having different sets of class labels, we ran experiments on the CoIL dataset and on artificially incompatible versions of the Isolet tasks. Note that in this setting, mt-svm cannot be used because there is no intuitive way to extend it to the label-incompatible multi-class multi-label MTL setting.

    Task       1        2        3        4        5        Avg
    Euc        11.25%   10.52%   14.79%   14.79%   9.38%    12.15%
    U-lmnn     4.27%    3.02%    6.25%    6.25%    2.71%    4.50%
    mt-lmnn    3.44%    2.71%    5.83%    5.52%    1.77%    3.85%
    st-lmnn    4.48%    3.96%    6.04%    6.46%    2.71%    4.73%
    st-net     3.92%    2.50%    6.67%    5.83%    1.58%    4.10%
    mt-net     3.43%    2.78%    6.39%    5.93%    1.67%    4.04%
    st-svm     7.08%    6.83%    9.58%    9.83%    6.17%    7.90%
    Table 5: Error rates on Isolet label-incompatible tasks with task-specific train sets.

    Task       1        2        3        4        5        6        Avg
    # classes  40       6        10       10       4        2
    Euc        24.65%   6.78%    18.48%   7.83%    33.18%   9.25%    16.70%
    st-lmnn    13.67%   5.72%    13.28%   6.05%    8.23%    9.12%    9.35%
    mt-lmnn    12.75%   5.12%    11.06%   6.00%    7.54%    9.10%    8.60%
    st-net     47.45%   17.25%   23.12%   19.95%   3.63%    5.95%    19.56%
    mt-net     47.05%   19.35%   27.80%   17.40%   3.63%    6.00%    20.20%
    st-svm     55.68%   36.30%   40.98%   32.98%   3.63%    5.95%    29.25%
    Table 6: Error rates on CoIL label-incompatible tasks. See text for details.
Also, U-lmnn cannot be used with the CoIL data tasks since all of them share the same input. For each original subset of Isolet we picked 10 labels at random and reduced the dataset to only examples with these labels (resulting in 600 data points per set and different sets of output labels). Table 5 shows the results of the kNN algorithm under the various metrics, along with single-task and multi-task versions of svm and neural networks on these tasks. Mt-lmnn yields the lowest average error rate across all tasks.

The classification error rates on the CoIL data tasks are shown in Table 6. The multi-task neural network and svm have a hard time with most of the tasks and at times perform worse than their single-task versions. Once again, mt-lmnn improves upon its single-task counterparts, demonstrating the sharing of knowledge between tasks. Both svm and neural networks perform very well on the tasks with the fewest classes, whereas mt-lmnn does very well on tasks with many classes (in particular the 40-way classification of task 1).

5.2 Domain Adaptation

Domain adaptation attempts to learn a severely undersampled target domain with the help of source domains with plenty of data, which may not have the same sample distribution as that of the target. For instance, in the context of speech recognition, one might have a lot of annotated speech recordings from a set of lab volunteers but not much from the client who will use the system. In such cases, we would like the learned classifier to gracefully adapt its recognition / classification rule to the target domain as more data becomes available. Unlike the previous setting, we now have one specific target task which can be heavily under-sampled. We evaluate the domain adaptation capability of mt-lmnn with isolet1-4 as the source and isolet5 as the target domain, across varying amounts of available labeled target data. The classification errors of kNN under the mt-lmnn and U-lmnn metrics are shown in Figure 3.

In the absence of any training data from isolet5 (also referred to as the cold-start scenario), we used the global metric M_0 learned by mt-lmnn on tasks isolet1-4. U-lmnn and the mt-lmnn global metric perform much better than the Euclidean metric, with U-lmnn giving slightly better classification. With the availability of more data characteristic of the new task, the performance of mt-lmnn improves much faster than U-lmnn. Note that the Euclidean error actually increases with more target data, presumably because utterances from the same speaker might be close together in Euclidean space even if they are from different classes, leading to additional misclassifications.

[Figure 3: mt-lmnn, U-lmnn and Euclidean test error rates (%) on an unseen task as a function of the fraction of isolet5 used for training.]

6 Related Work

Caruana was the first to demonstrate results on multi-task learning for k-nearest neighbor regression and locally weighted averaging [6]. The multi-task aspect of that work focused on finding common feature weights across multiple, related tasks. In contrast, our work focuses on classification and learns different metrics with shared components. Previous work on multi-task learning largely focused on neural networks [6, 8], where a hidden layer is shared between various tasks. This approach is related to our work as it also learns a joint representation across tasks.
It differs in the way classification and the optimization are performed. Mt-lmnn uses the kNN rule and can be expressed as a convex optimization problem with the accompanying convergence guarantees. Most recent work in multi-task learning focuses on linear classifiers [11, 15] or kernel machines [14]. Our work was influenced by these publications, especially in the way the decoupling of joint and task-specific parameters is performed. However, our method uses a different optimization and learns metrics rather than separating hyperplanes.

7 Conclusion

In this paper we introduced a novel multi-task learning algorithm, mt-lmnn. To our knowledge, it is the first metric learning algorithm that embraces the multi-task learning paradigm and goes beyond feature re-weighting for pooled training data. We demonstrated the abilities of mt-lmnn on real-world datasets. Mt-lmnn consistently outperformed single-task metrics for kNN in almost all of the learning settings and obtains better classification results than multi-task neural networks and support vector machines. Addressing a major limitation of mt-svm, mt-lmnn is applicable (and effective) on multiple multi-class tasks with different sets of classes. This MTL framework can also be easily adapted for other metric learning algorithms, including the online version of lmnn [7]. A further research extension is to incorporate known structure by introducing additional sub-global metrics that are shared only by a strict subset of the tasks.

The nearest neighbor classification rule is a natural fit for multi-task learning if accompanied with a suitable metric. By extending one of the state-of-the-art metric learning algorithms to the multi-task learning paradigm, mt-lmnn provides a more integrative methodology for metric learning across multiple learning problems.

Acknowledgments

The authors would like to thank Lawrence Saul for helpful discussions. This research was supported in part by the UCSD FWGrid Project, NSF Research Infrastructure Grant Number EIA-0303622.

References

[1] B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[2] S. Ben-David, J. Gehrke, and R. Schuller. A theoretical framework for learning from a pool of disparate data sources. In KDD, pages 443-449, 2002.
[3] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. In COLT, pages 567-580, 2003.
[4] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144-152. ACM, New York, NY, USA, 1992.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[7] G. Chechik, U. Shalit, V. Sharma, and S. Bengio. An online algorithm for large scale image similarity learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 306-314, 2009.
[8] R. Collobert and J. Weston. A unified architecture for NLP: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160-167. ACM, New York, NY, USA, 2008.
[9] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, IT-13, pages 21-27, 1967.
[10] K. Crammer and Y. Singer.
On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265-292, 2002.
[11] H. Daumé III. Frustratingly easy domain adaptation. In Annual Meeting of the Association for Computational Linguistics, volume 45, page 256, 2007.
[12] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[13] V. Digalakis, D. Rtischev, and L. Neumeyer. Fast speaker adaptation using constrained estimation of Gaussian mixtures. IEEE Trans. on Speech and Audio Processing, pages 357-366, 1995.
[14] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615, 2006.
[15] T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, pages 109-117, 2004.
[16] M. A. Fanty and R. Cole. Spoken letter recognition. In Advances in Neural Information Processing Systems 4, page 220. MIT Press, 1990.
[17] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513-520, Cambridge, MA, 2005. MIT Press.
[18] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[19] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. A projected subgradient method for scalable multi-task learning. Massachusetts Institute of Technology, Technical Report, 2008.
[20] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207-244, 2009.
[21] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In ESANN, page 219, 1999.
3,241
3,936
Relaxed Clipping: A Global Training Method for Robust Regression and Classification

Yaoliang Yu, Min Yang, Linli Xu, Martha White, Dale Schuurmans
University of Alberta, Dept. Computing Science, Edmonton AB T6G 2E8, Canada
{yaoliang,myang2,linli,whitem,dale}@cs.ualberta.ca

Abstract

Robust regression and classification are often thought to require non-convex loss functions that prevent scalable, global training. However, such a view neglects the possibility of reformulated training methods that can yield practically solvable alternatives. A natural way to make a loss function more robust to outliers is to truncate loss values that exceed a maximum threshold. We demonstrate that a relaxation of this form of "loss clipping" can be made globally solvable and applicable to any standard loss while guaranteeing robustness against outliers. We present a generic procedure that can be applied to standard loss functions and demonstrate improved robustness in regression and classification problems.

1 Introduction

Robust statistics is a well established field that analyzes the sensitivity of common estimators to outliers and provides alternative estimators that achieve improved robustness [11, 13, 17, 23]. Outliers are understood to be observations that have been corrupted, incorrectly measured, mis-recorded, drawn under different conditions than those intended, or so atypical as to require separate modeling. The main goal of classical robust statistics is to make estimators invariant, or nearly invariant, to arbitrary changes made to a non-trivial fraction of the sample data, a goal that is equally relevant to machine learning research, given that data sets are often collected with limited or no quality control, making outliers ubiquitous. Unfortunately, the state-of-the-art in robust statistics relies on non-convex training criteria that have yet to yield efficient global solution methods [13, 17, 23].

Although many robust regression methods have been proposed in the classical literature, M-estimators continue to be a dominant approach [13, 17]. These correspond to the standard machine learning approach of minimizing a sum of prediction errors under a given loss function (assuming a fixed scaling). M-estimation is reasonably well understood, analytically tractable, and provides a simple framework for trading off between robustness against outliers and data efficiency on inliers [13, 17]. Unfortunately, robustness in this context comes with a cost: when minimizing a convex loss, even a single data point can dominate the result. That is, any (non-constant) convex loss function exhibits necessarily unbounded sensitivity to even a single outlier [17, §5.4.1]. Although unbounded sensitivity can obviously be mitigated by imposing prior bounds on the domain and range of the data [5, 6], such is not always possible in practice. Instead, the classical literature achieves bounded outlier sensitivity by considering redescending loss functions (see [17, §2.2] for a definition) or, more restrictively, bounded loss functions, both of which are inherently non-convex.

Robust regression has also been extensively investigated in computer vision [2, 26], where a similar conclusion has been reached: bounded loss functions are necessary to counteract the types of outliers created by edge discontinuities, multiple motions, and specularities in image data. For classification the story is similar.
The attempt to avoid outlier sensitivity has led many to propose bounded loss functions [8, 15, 18, 19, 25] to replace the standard convex, unbounded losses deployed in support vector machines and boosting [9] respectively. In fact, [16] has shown that minimizing any convex margin loss cannot achieve robustness to random misclassification noise. The conclusion reached in the classification literature, as in the regression literature, is therefore that non-convexity is necessary to ensure robustness against outliers, creating an apparent dilemma: one can achieve global training via convexity or outlier robustness via boundedness, but not both.

In this paper we present a counterpoint to these pessimistic conclusions. In particular, we present a general model for bounding any convex loss function, via a process of "loss clipping", that ensures bounded sensitivity to outliers. Although the resulting optimization problem is not, by itself, convex, we demonstrate an efficient convex relaxation and rounding procedure that guarantees bounded response to data, a guarantee that cannot be established for any convex loss minimization on its own. The approach we propose is generic and can be applied to any standard loss function, be it for regression or classification. Our work is inspired by a number of studies that have investigated robust estimators in computer vision and machine learning [2, 26, 27, 30]. However, these previous attempts were either hampered by local optimization or restricted to special cases; none had guarantees of global training and outlier insensitivity.

Before proceeding it is important to realize that there are many alternative conceptions of "robustness" in the literature that do not correspond to the notion we are investigating. For example, work on "robust optimization" [28, 29] considers minimizing the worst case loss achieved given prespecified bounds on the maximum data deviation that will be considered. Although interesting, these results do not directly bear on the question at hand, since we explicitly do not bound the magnitude of the outliers (i.e. the degree of leverage [23, §1.1]), nor the size of response deviations. Another notion of robustness is algorithmic stability under leave-one-out perturbation [3]. Although loosely related, algorithmic stability addresses the analysis of given learning procedures rather than describing how a stable algorithm might be generally achieved in the presence of arbitrary outliers. We also do not focus on asymptotic or infinitesimal notions from robust statistics, such as influence functions [11], nor impose boundedness assumptions on the domain and range of the data or the predictor [5, 6].

2 Background

We consider the standard supervised setting where one is given an input matrix X and output targets y, with the goal of learning a predictor h : R^m → R. Each row of X gives the feature representation for one training example, denoted X_{i:}, with corresponding target y_i. We will assume the predictor can be written as a generalized linear model; that is, the predictions are given by ŷ_i = f(X_{i:} θ) for a fixed transfer function f (possibly the identity) and a vector of parameters θ. For training, we will consider the standard L2-regularized loss minimization problem

    min_θ (λ/2) ‖θ‖²_2 + Σ_{i=1}^n L(y_i, ŷ_i)  =  min_θ (λ/2) ‖θ‖²_2 + Σ_{i=1}^n L(y_i, f(X_{i:} θ))    (1)

where L denotes the loss function, λ is the regularization constant, and n denotes the number of training examples. Normally the loss function L is chosen to be convex in θ so that the minimization problem can be solved efficiently. Although convexity is important for computational tractability, it has the undesired side-effect of causing unbounded outlier sensitivity, as mentioned. Obviously, the severity of the problem will range from minimal to extreme depending on the nature of the distribution over (x, y). Nevertheless, our goal in this paper will be to eliminate unbounded sensitivity for convex loss functions while retaining a scalable computational approach.¹

Standard Convex Loss Functions: Our general construction applies to arbitrary convex losses, but we will demonstrate our methods on standard loss functions employed in regression and classification. A standard example is Bregman divergences, which are defined by taking a strongly convex differentiable potential φ and then taking the difference between the potential and its first-order Taylor approximation, obtaining a loss L_φ(ŷ‖y) = φ(ŷ) − φ(y) − f(y)(ŷ − y), where f(y) = φ'(y) [1, 14]. Several natural loss functions can be defined this way, including least squares L_φ(ŷ‖y) = (ŷ − y)²/2, using the potential φ(y) = y²/2, and forward KL-divergence L_φ(ŷ‖y) = ŷ ln(ŷ/y) + (1 − ŷ) ln((1 − ŷ)/(1 − y)), using the potential φ(y) = y ln y + (1 − y) ln(1 − y) for 0 ≤ y ≤ 1.

¹All results in this paper extend to reproducing kernel Hilbert spaces via the representer theorem [24], but for clarity of presentation we will use an explicit feature representation X even though it is not a requirement.

A related construction is matching losses [14], which are determined by taking a strictly increasing differentiable transfer function f to be used in prediction via ŷ = f(z) where z = x^T θ. Then, given a transfer f, a loss can be defined by

    L_F(ẑ‖z) = ∫_z^ẑ f(ζ) − f(z) dζ = F(ẑ) − F(z) − f(z)(ẑ − z)

such that F satisfies F'(z) = f(z). By definition, matching losses are also Bregman divergences, since F is differentiable and the assumptions on f imply that F is strongly convex. These two loss constructions are related by the equality L_φ(y‖ŷ) = L_F(ẑ‖z), where F is the Legendre-Fenchel conjugate of φ [4, §3.3], z = f^{-1}(y) = φ'(y) and ẑ = f^{-1}(ŷ) = φ'(ŷ) [1, 14]. For example, the post-prediction KL-divergence y ln(y/ŷ) + (1 − y) ln((1 − y)/(1 − ŷ)) is equal to the convex pre-prediction loss L_F(ẑ‖z) = ln(e^ẑ + 1) − ln(e^z + 1) − σ(z)(ẑ − z) via the transfer ŷ = σ(ẑ) = (1 + e^{−ẑ})^{−1}. Such losses are prevalent in regression and probabilistic classification settings.

For discrete classification it is also natural to work with a continuous pre-prediction space ẑ = x^T θ, recovering discrete post-predictions ŷ ∈ {−1, 1} via a step transfer ŷ = sign(z). Although a step transfer does not admit the matching loss construction, a surrogate margin loss can be obtained by taking a nonincreasing function l such that lim_{m→∞} l(m) = 0, then defining L_l(ŷ, y) = l(y ŷ). Here y ŷ is known as the classification margin. Standard examples include misclassification loss, L_l(ŷ, y) = 1_{(yŷ<0)}; support vector machine (hinge) loss, L_l(ŷ, y) = max(0, 1 − y ŷ); binomial deviance loss, L_l(ŷ, y) = ln(1 + e^{−yŷ}) [12]; and Adaboost loss, L_l(ŷ, y) = e^{−yŷ} [9]. If the margin loss is furthermore chosen to be convex, efficient minimization can be attained.

To unify our presentation below, we will simply denote all loss functions by ℓ(y, x^T θ), with the understanding that ℓ(y, x^T θ) = L_φ(x^T θ ‖ y) if the loss is a Bregman divergence on potential φ; ℓ(y, x^T θ) = L_F(x^T θ ‖ f^{-1}(y)) if the loss is a matching loss with transfer f; and ℓ(y, x^T θ) = l(y x^T θ) if the loss is a margin loss with margin function l. In each case, the loss is convex in the parameters θ. Note that by their very convexity these losses cannot be robust: all admit unbounded sensitivity to a single outlier (the same is also true for the L1 loss when applied to regression).
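The unified notation ℓ(y, x^T θ) covers all of the losses just listed; the sketch below (our own, with hypothetical names) collects several of them behind a single interface, each written as a function of the pre-prediction z = x^T θ:

    import numpy as np

    def squared_loss(y, z):        # Bregman divergence with phi(y) = y^2 / 2
        return 0.5 * (z - y) ** 2

    def deviance_loss(y, z):       # binomial deviance margin loss, y in {-1, +1}
        return np.log1p(np.exp(-y * z))

    def hinge_loss(y, z):          # svm margin loss, y in {-1, +1}
        return np.maximum(0.0, 1.0 - y * z)

    def adaboost_loss(y, z):       # exponential margin loss
        return np.exp(-y * z)

Each of these is convex in z, hence convex in θ, and therefore each admits unbounded sensitivity to a single outlier.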
so that the minimization problem can be solved efficiently. Although convexity is important for computational tractability, it has the undesired side-effect of causing unbounded outlier sensitivity, as mentioned. Obviously, the severity of the problem will range from minimal to extreme depending on the nature of the distribution over (x, y). Nevertheless, our goal in this paper will be to eliminate unbounded sensitivity for convex loss functions while retaining a scalable computational approach.1 Standard Convex Loss Functions: Our general construction applies to arbitrary convex losses, but we will demonstrate our methods on standard loss functions employed in regression and classification. A standard example is Bregman divergences, which are defined by taking a strongly convex differentiable potential ? then taking the difference between the potential and its first order Taylor approximation, obtaining a loss L? (? y ky) = ?(? y ) ? ?(y) ? ?(y)(? y ? y), where ?(y) = ?? (y) [1, 14]. Several natural loss functions can be defined this way, including least squares L? (? y ky) = (? y ? y)2 /2, using the potential ?(y) = y 2 /2, and forward KL-divergence L? (? y ky) = y? y y? ln y + (1 ? y?) ln 1?? , using the potential ?(y) = y ln y + (1 ? y) ln(1 ? y) for 0 ? y ? 1. 1?y 1 All results in this paper extend to reproducing kernel Hilbert spaces via the representer theorem [24], but for clarity of presentation we will use an explicit feature representation X even though it is not a requirement. 2 A related construction is matching losses [14], which are determined by taking a strictly increasing differentiable transfer function f to be used in prediction via y? = f (z) where z = x? ?. Then, given R z? a transfer f , a loss can be defined by LF (? z kz) = z f (?) ? f (z) d? = F (? z ) ? F (z) ? f (z)(? z ? z) such that F satisfies F ? (z) = f (z). By definition, matching losses are also Bregman divergences, since F is differentiable and the assumptions on f imply that F is strongly convex. These two loss constructions are related by the equality L? (yk? y ) = LF (? z kz) where F is the Legendre-Fenchel conjugate of ? [4, ?3.3], z = f ?1 (y) = ?(y) and z? = f ?1 (? y ) = ?(? y ) [1, 14]. For example, the 1?y is equal to the convex pre-prediction loss post-prediction KL-divergence y ln yy? + (1 ? y) ln 1?? y z? z LF (? z kz) = ln(e + 1) ? ln(e + 1) ? ?(z)(? z ? z) via the transfer y? = ?(? z ) = (1 + e??z )?1 . Such losses are prevalent in regression and probabilistic classification settings. For discrete classification it is also natural to work with a continuous pre-prediction space z? = x? ?, recovering discrete post-predictions y? ? {?1, 1} via a step transfer y? = sign(z). Although a step transfer does not admit the matching loss construction, a surrogate margin loss can be obtained by taking a nonincreasing function l such that limm?? l(m) = 0, then defining Ll (? y , y) = l(y y?). Here y y? is known as the classification margin. Standard examples include misclassification loss, Ll (? y , y) = 1(yy?<0) , support vector machine (hinge) loss, Ll (? y , y) = max(0, 1 ? y y?), binomial deviance loss, Ll (? y , y) = ln(1 + e?yy?) [12], and Adaboost loss, Ll (? y , y) = e?yy? [9]. If the margin loss is furthermore chosen to be convex, efficient minimization can be attained. To unify our presentation below we will simply denote all loss functions by ?(y, x? ?), with the understanding that ?(y, x? ?) = L? (x? ?ky) if the loss is Bregman divergence on potential ?; ?(y, x? ?) = LF (x? 
?kf ?1 (y)) if the loss is a matching loss with transfer f ; and ?(y, x? ?) = l(yx? ?) if the loss is a margin loss with margin function l. In each case, the loss is convex in the parameters ?. Note that by their very convexity these losses cannot be robust: all admit unbounded sensitivity to a single outlier (the same is also true for L1 loss when applied to regression). Bounded loss functions: As observed, non-convex loss functions are necessary to bound the effects of outliers [17]. Black and Rangarajan [2] provide a useful catalog of bounded and redescending loss functions for robust regression, of which a representative example is the Geman and McClure loss L(y, y?) = (? y ? y)2 /(? + (? y ? y)2 ) for ? > 0; see Figure 1. Unfortunately, as Figure 1 makes plain, boundedness implies non-convexity (for any non-constant function). It therefore appears that bounded loss functions achieve robustness at the cost of losing global training guarantees. Our goal is to show that robustness and efficient global training are not mutually exclusive. Despite extensive research on regression and classification, almost no work we are aware of (save perhaps [30] in a limited way) attempts to reconcile robustness to outliers with global training algorithms. 3 Loss Clipping Adapting the ideas of [2, 27, 30], given any convex loss ?(y, x? ?) define the clipped loss as ?c (y, x? ?) = min(1, ?(y, x? ?)). (2) Figure 1 demonstrates loss clipping for some standard loss functions. Given a clipped loss, a robust form of training problem (1) can be written as n X ? min k?k22 + ?c (yi , Xi: ?). ? 2 i=1 (3) Clearly such a training objective bounds the influence of any one training example on the final result. Unfortunately, the formulation (3) is not computationally convenient because the optimization problem it poses is neither convex nor smooth. To make progress on the computational question we exploit a key observation: for any loss function, its corresponding clipped loss can be indirectly expressed by an auxiliary optimization of a smooth objective (if the original loss function itself was smooth). That is, given a loss ?(y, x? ?) define the corresponding ?-relaxed loss to be ?? (y, x? ?) = ??(y, x? ?) + 1 ? ? (4) for 0 ? ? ? 1; see Figure 1. This construction is an instance of an outlier process as described in [2] and is motivated by a special case hinge-loss construction originally proposed in [30]. The 3 loss 0 y ? y? 0 y y? 0 y y? Figure 1: Comparing standard losses (dashed) with corresponding ?clipped? losses (solid), ?-relaxed losses (dotted), and non-convex robust losses (dash-dotted). Left: squared loss (dashed), clipped (solid), 1/3-relaxed (dotted), robust Geman and McClure loss [2] (dash-dotted). Center: SVM hinge loss (dashed), clipped [27, 30] (solid), 1/2-relaxed (upper dotted), robust 1 ? tanh(y y?) loss [19] (dash-dotted). Right: Adaboost exponential loss (dashed), clipped (solid), 1/2-relaxed (upper dotted), robust 1 ? tanh(y y?) loss [19] (dash-dotted). ?-relaxation provides a convenient characterization of any clipped loss, since it can be shown in general that minimizing a corresponding ?-relaxed loss is equivalent to minimizing the clipped loss. Proposition 1 For any loss function ?(y, x? ?), we have ?c (y, x? ?) = min0???1 ?? (y, x? ?). (The proof is straightforward, but it is given in the supplement for completeness.) Proposition 1 now allows us to reformulate (3) as a smooth optimization using the fact that the optimization is completely separable between the ?i variables: n X ? 
?i ?(yi , Xi: ?) + 1 ? ?i . (5) (3) = min min k?k22 + ? 0???1 2 i=1 Unfortunately, the resulting problem is not jointly convex in ? and ? even though it is convex in each given the other. Such marginal convexity might suggest that an alternating minimization strategy, however the proof of Proposition 1 shows that each minimization over ? will result in ?i = 0 for losses greater than 1, or ?i = 1 for losses less than 1. Such discrete assignments immediately causes the search to get trapped in local minima, requiring a more sophisticated approach to be considered. 4 A Convex Relaxation One contribution of this paper is to derive an exact reformulation of (5) that admits a convex relaxation and rounding scheme that retain bounded sensitivity to outliers. We first show how the relaxation can be efficiently solved by a scalable algorithm that eliminates any need for semidefinite programming, then provide a guarantee of bounded outlier sensitivity in Section 5. Reformulation: To ease the notational burden, let us rewrite (5) in matrix-vector form (5) = min min R(?, ?) 0???1 ? (6) ? k?k2 + ?? ?(y, X?) + 1? (1 ? ?). (7) 2 Here 1 denotes the vector of all 1s, and it is understood that ?(y, X?) refers to the n ? 1 vector of individual training losses. Given that ?(?, ?) is convex in its second argument we will be able to exploit Fenchel duality to re-express the min-min form (6) into a min-max form that will serve as the basis for the subsequent relaxation. In particular, consider the definition where R(?, ?) = ?? (y, ?) = sup ?x? ? ? ?(y, x? ?). (8) ? By construction, ?? (y, ?) is guaranteed to be convex in ? since it is a pointwise maximum over linear functions [4, ?3.2]. Lemma 1 For any convex differentiable loss function ?(y, x? ?) such that the level sets of ?? (v) = ?x? (? ? v) + ?(y, x? v) are bounded, we have ?(y, x? ?) = sup ?x? ? ? ?? (y, ?). ? 4 (9) (This is a standard result, but a proof is given in the supplement for completeness.) For standard losses ?? (y, ?) can be computed explicitly [1, 7]. For example, if ?(y, x? ?) = (y ? x? ?)2 /2 then ?? (y, ?) = ?2 /2 + ?y. Now let ?(?) denote putting ? in the main diagonal of a square matrix and let ?? (y, ?) refer to the n ? 1 vector of dual values over training examples. We can then express the main reformulation as follows. Theorem 1 Let K = XX ? denote the kernel matrix over input data. Then (6) = min 1 1 1 ? ?n+1 1??? ?n+1 1, ?1 = ?n+1 , k?k=1 sup ?(n + 1) ? ? T (?)? (10) ? where ? is an (n + 1) ? 1 variable, ? is an n ? 1 variable, and the matrix T (?) is given by     1 2(1? ?? (y, ?) ? n) (?? (y, ?) + 1)? 1 1? . (11) ?(?)K?(?) [ 1 I ] + T (?) = ? ? (y, ?) + 1 0 8? I 4 The proof consists in first dualizing ? in (6) via Lemma 1, which establishes the key relationship ? = ? ?1 X ? ?(?)?. (12) The remainder of the proof is merely algebra: given ? a solution ? to (10), the corresponding solution ? to (6) can be recovered via ? = 21 (1 + ? 2:n+1 n + 1). See the supplement for full details. Note that the formulation (10) given in Theorem 1 is exact. No approximation to the problem (6) has been introduced to this point. Unfortunately, as in (6), the formulation (10) is still not directly amenable to an efficient algorithm: the objective is concave in ?, conveniently, but it is not convex in ?. The advantage attained by (10) however is that we can now derive an effective relaxation. Relaxation: Let ?(M ) denote the main diagonal vector of the square matrix M and let tr(M ) denote the trace. Consider the following relaxation (10) ? 
Relaxation: Let δ(M) denote the main diagonal vector of a square matrix M and let tr(M) denote the trace. Consider the following relaxation:

    (10) ≥ min_{M ⪰ 0, δ(M) = (1/(n+1)) 1}  sup_μ  −(n + 1) tr(M T(μ))    (13)
         = sup_μ  min_{M ⪰ 0, δ(M) = (1/(n+1)) 1}  −(n + 1) tr(M T(μ))    (14)

where we used strong minimax duality to obtain (14) from (13): since the constraint region on M is compact and the inner objective is concave and convex in μ and M respectively, Sion's minimax theorem is applicable [22, §37]. Next enforce the constraint δ(M) = (1/(n+1)) 1 with a Lagrange multiplier ω:

    (14) = sup_{μ, ω}  min_{M ⪰ 0, tr(M) = 1}  −(n + 1) tr(M T(μ)) + ω^T (1 − (n + 1) δ(M))    (15)
         = sup_{μ, ω}  ω^T 1 − (n + 1) max_{M ⪰ 0, tr(M) = 1} tr[ M (T(μ) + Δ(ω)) ].    (16)

This relaxed formulation (16) is now amenable to efficient global optimization: the outer problem is jointly concave in μ and ω, since it is a pointwise minimum of concave functions. The inner optimization with respect to M can be simplified by exploiting the well-known result [21]:

    max_{M ⪰ 0, tr(M) = 1} tr[ M (T(μ) + Δ(ω)) ] = max_{‖ν‖ = 1} ν^T (T(μ) + Δ(ω)) ν.    (17)

Therefore, given μ and ω, the inner problem is solved by the maximum eigenvector of T(μ) + Δ(ω).

Optimization Procedure: Given training data, an outer maximization can be executed jointly over μ and ω to maximize (16). This outer problem is concave in μ and ω, hence no local maxima exist. Although the outer problem is not smooth, many effective methods exist for nonsmooth convex optimization [20, 31]. Each outer function evaluation (and subgradient calculation) requires the inner problem (17) to be solved. Fortunately, a simple power method [10] can be used to efficiently compute a maximum eigenvector solution to the inner problem by only performing matrix-vector multiplications on the individual factors of the two low-rank matrices making up T(μ), meaning the inner problem can be solved without ever forming a large n × n matrix T(μ). That is, if X is n × m, each inner iteration requires at most O(nm) computation.
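A generic power-method sketch for the inner problem (17) follows; it is our own illustration (not the authors' special-purpose solver) and exploits the factored form of T(μ), under the reconstruction of eq. (11) above, so that each product costs O(nm). The operator should be shifted to be positive definite so that the leading eigenvector is the maximum one:

    import numpy as np

    def matvec_factory(X, mu, lstar, omega, lam, shift=0.0):
        # (T(mu) + Delta(omega) + shift * I) v without forming any n x n matrix
        n = X.shape[0]
        def matvec(v):
            v0, v1 = v[0], v[1:]
            u = mu * (v0 + v1)                        # Delta(mu) [1 I] v
            u = mu * (X @ (X.T @ u)) / (8.0 * lam)    # Delta(mu) K Delta(mu) ..., K = X X^T
            quad = np.concatenate([[u.sum()], u])     # [1 I]^T u
            lin = np.concatenate([[2 * (lstar.sum() - n) * v0 + (lstar + 1) @ v1],
                                  (lstar + 1) * v0]) / 4.0
            return quad + lin + omega * v + shift * v # omega has length n + 1
        return matvec

    def power_method(matvec, dim, iters=1000, tol=1e-10, seed=0):
        # Leading eigenvector from repeated matrix-vector products
        v = np.random.default_rng(seed).normal(size=dim)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = matvec(v)
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        return v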
j then setting ? ? = 2 1+? can be recovered simply by computing ? ? ?1 From the constraints on C ? it follows that ??1 ? ? ? ? hence 0 ? ? ? ? 1 ?j (details in j j n+1 n+1 ? from ? ? , we the supplement). Finally, instead of relying on (12) to recover the model parameters ? ? ? ? -relaxed loss (7) given ? ? to recover ? via ? = arg min? R(? explicitly minimize the ? ?, ?). Although the rounding step has introduced an approximation, we establish that bounded outlier sensitivity can still be retained, even after the above relaxation and rounding processes, and demonstrate experimentally that the gap from optimality is generally not too large. 5 Bounding Outlier Sensitivity Thus far we have proposed a robust training objective, provided an efficient convex relaxation that establishes a lower bound, and proposed a simple rounding method for recovering an approximate solution. The question remains as to whether the approximate solution retains bounded sensitivity to outliers (or to leverage points [23, ?1.1]). Let (?? , ? ? ) denote the joint minimizer of (6) and let ? denote the approximate solution obtained from the procedure above. (? ?, ?) First, observe that an upper bound on the approximation error can be easily computed by subtract? Our experiments below show that ing the lower bound value obtained in (14)?(16) from R(? ?, ?). reasonable gaps are obtained in this way. Nevertheless one would still like to guarantee that the gap stays bounded in the presence of arbitrary outliers and leverage points. ? ?, ?? ) ? 2R(?? , ? ? ) ? 2n, where R(? ? ?, ?? ) is the value of (10) at the rounded Theorem 2 R(? ? . Furthermore, if the unclipped loss ?(y, y?) is b-Lipschitz in y? for b < ? and either y or solution ? ? ? c. K remains bounded, then there exists a c < ? such that R(? ?, ?) That is, the ?-relaxed loss obtained by the rounded solution stays bounded in this case, even when accounting for the proposed relaxation and rounding procedure and data perturbation. The complete proof takes some space, however the key steps are to show that ?(n+1)tr(M ? T (?? )) ? R(?? , ? ? ), ? ? c, respectively (full details ? ?, ?? ) ? 2R(?? , ? ? ) and R(? and then use this to establish that R(? ?, ?) ? in the supplement). Thus, (? ?, ?) will not chase outliers or leverage points arbitrarily in this situation. Note that the proposed method cannot be characterized by minimizing a fixed convex loss. That is, the tightest convex upper bound for any convex loss function is simply given by the function itself, which corresponds to setting ?i = 1 for every training example. By contrast, our approximation method does not choose a constant ?i = 1 for every training example, but instead adaptively chooses ?i values, closer to 1 for inliers and closer to 0 for outliers. The resulting upper bound on the clipped loss (hence on the misclassification error in the margin loss case) is much tighter than that achieved by simply minimizing a convex loss. This outcome is demonstrated clearly in our experiments. 6 60 60 40 40 40 20 20 20 Data ClipAlt (local) L1 L2 ClipRelax ?20 ?40 ?60 ?6 0 ?4 ?2 0 2 4 Data ClipAlt (local) L1 L2 ClipRelax ?20 ?40 6 ?60 ?6 0 y 0 y y 60 ?4 ?2 0 2 4 Data ClipAlt (local) L1 L2 ClipRelax ?20 ?40 6 ?60 ?6 ?4 ?2 x x (a) 0 2 4 6 x (b) (c) Figure 2: Comparison on three demonstration data sets. Loss L2 L1 HuberM GM (local) ClipAlt (local) ClipRelax OptimGap p = 0.0 2.53 ? 0.0015 2.53 ? 0.0015 2.52 ? 0.0015 2.53 ? 0.0015 2.53 ? 0.0019 2.53 ? 0.0016 1.65% ? 0.31% outlier probability p = 0.2 25.11 ? 
6 Experimental Results

In this section, we experimentally evaluate the preceding technical developments on synthetic and real data for both regression and classification.

Regression: We first illustrate the behavior of the various regression techniques with a simple demonstration. In Figure 2(a) and (b), we generate a cluster of linearly related data y = x in a small interval about the origin, then add outliers. In Figure 2(c) the target linear model is mixed with another, more dispersed model. We compare the behaviors of standard regression losses: least-squares (L2), L1 (L1), the Huber minimax loss (HuberM) [13, 17], and the robust Geman-McClure loss (GM) [2]. To these we compare the proposed relaxed method (ClipRelax), along with an alternating minimizer of the clipped loss (ClipAlt). (In this problem the value of \rho has little effect, and is simply set to 0.1.) Figure 2 demonstrates that the three convex losses, L2, L1 and HuberM, are dominated by outliers. By contrast, ClipRelax successfully found the correct linear model in each case. Note that the robust GM loss finds two different minima, corresponding to those of L2 and ClipRelax respectively, hence it is not depicted in the plot. ClipAlt also gets trapped in local minima, as expected: it finds the correct model in Figure 2(a) but incorrect models in Figure 2(b) and (c).

In our second synthetic regression experiment we consider larger problems. Here a target weight vector \theta is drawn from N(0, I), with inputs X_{i:} sampled uniformly from [0, 1]^m, m = 5. The outputs y_i are computed as y_i = X_{i:}\theta + \epsilon_i, with \epsilon_i ~ N(0, 1/4). We then seed the data set with outliers by randomly re-sampling each y_i (and X_{i:}) from N(0, 10^5) and N(0, 10^2) respectively, governed by an outlier probability p. Here 200 of the 700 examples are randomly chosen as the training set and the rest are used for testing. We compare the same six methods: L2, L1, HuberM, GM, ClipAlt and ClipRelax. The regularization parameter was set on a separate validation set. These experiments are repeated 20 times and average (Huber loss) test errors on clean data are reported (with standard deviations) in Table 1. Clearly, the outliers significantly affect the performance of least squares. In this case the proposed relaxation performs comparably to the non-convex GM loss. Interestingly, this experiment shows that the relative gap between the \rho-robust loss obtained by the proposed method and the lower bound on the optimal \rho-robust loss (16) remains remarkably small, indicating that our robust relaxation (almost) optimally minimizes the original non-convex clipped loss.
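As a concrete illustration of the synthetic protocol described above, the following Python sketch generates the contaminated data (our code, not the authors'; we interpret the second parameter of N(0, 10^5) and N(0, 10^2) as variances, which is an assumption):

    import numpy as np

    def make_outlier_data(n=700, m=5, p=0.2, rng=None):
        """Clean targets y = X @ theta + eps (noise variance 1/4); with
        probability p each example is replaced by an outlier."""
        rng = np.random.default_rng(rng)
        theta = rng.standard_normal(m)
        X = rng.uniform(0.0, 1.0, size=(n, m))
        y = X @ theta + rng.normal(0.0, 0.5, size=n)
        mask = rng.random(n) < p
        y[mask] = rng.normal(0.0, np.sqrt(1e5), size=mask.sum())
        X[mask] = rng.normal(0.0, np.sqrt(1e2), size=(mask.sum(), m))
        return X, y, theta, mask

    # 200 of the 700 examples form the training set, the rest are for testing.
    X, y, theta, is_outlier = make_outlier_data(p=0.2, rng=0)
    idx = np.random.default_rng(1).permutation(len(y))
    train, test = idx[:200], idx[200:]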
Table 2: Error rates (average root mean squared error ± standard deviation) for the different regression estimators on various data sets. The values of (m, n, tt) are indicated for each data set, where m is the number of features, n the number of training examples, and tt the number of testing samples.

    Loss              Astronomy (1, 46, 46)   Cal-housing (8, 100, 1000)   Pumadyn (32, 500, 1000)
    L2                2.484                   804.5 ± 892.5                1.300e5 ± 68.29
    L1                0.170                   0.325 ± 0.046                5.133 ± 0.056
    HuberM            0.149                   0.306 ± 0.050                5.377 ± 0.007
    GM (local)        0.166                   0.329 ± 0.048                4.399 ± 0.003
    ClipAlt (local)   0.176                   0.329 ± 0.048                4.075 ± 1e-6
    ClipRelax         0.131                   0.136 ± 0.155                4.075 ± 1e-6

Table 3: Misclassification error rates on clean data (average error ± standard deviation) on the Long-Servedio problem [16] with increasing noise levels p.

    Loss              p = 0.02        p = 0.05        p = 0.1         p = 0.2
    Logit             4.88 ± 6.17     1.61 ± 7.23     17.67 ± 4.00    19.53 ± 2.91
    1-tanh (local)    0.91 ± 1.93     2.30 ± 2.85     6.49 ± 4.32     13.96 ± 3.38
    ClipAlt (local)   0.46 ± 0.64     1.51 ± 1.45     4.27 ± 2.57     11.32 ± 3.48
    ClipRelax         0.26 ± 0.34     0.78 ± 0.78     2.49 ± 3.38     10.10 ± 8.21

Finally, we investigated the behavior of the regression methods on a few real data sets. We chose three: astronomy data containing outliers from [23], and two UCI data sets, which we seeded with outliers. Test results are reported on clean data to avoid skewing the reported results. For the UCI data, outliers were added by resampling X_{i:} and y_i from N(0, 1000), with 5% outliers. The regularization parameter was chosen through 10-fold cross-validation on the training set. Note that in real regression problems one needs an estimate of the scale, given by the true standard deviation of the noise in the data. Here we estimated the scale using the mean absolute deviation, a robust approach commonly used in the robust statistics literature [17]. In Table 2, one can see that on both data sets ClipRelax clearly outperformed the other methods. L2 is clearly skewed by the outliers. Unsurprisingly, the classical robust loss functions, L1 and HuberM, perform better than L2 in the presence of outliers, but not as well as ClipRelax.

Classification: We investigated the well-known case study from [16] and compared the proposed method to logistic regression (i.e., the logit, or binomial deviance, loss [12]) and the robust 1-tanh loss [19] in a classification context. Here 200 examples were drawn from the target distribution with label noise applied at various levels. The experiment was repeated 50 times to obtain average results and standard deviations. Table 3 shows the test error performance on clean data for the different methods. From these results one can conclude that ClipRelax is more robust than standard logit training. Training with the logit loss is slightly better than the tanh loss algorithm in terms of training loss, but not significantly so. It is interesting to see that when prediction error is measured on clean labels, ClipRelax generalizes significantly better than the robust 1-tanh loss. This implies that the classification model produced by ClipRelax is closer to the true model despite the presence of outliers, demonstrating that the proposed method can be robust in a simple classification context.

7 Conclusion

We have proposed a robust estimation method for regression and classification based on a notion of "loss clipping". Although the method is not as fast as standard convex training, it is scalable to problems of moderate size. The key benefit is competitive (or better) estimation quality than the state of the art in robust estimation, while ensuring provable robustness to outliers and computable bounds on the optimality gap. To the best of our knowledge these two properties have never been previously achieved simultaneously.
It would be interesting to investigate whether the techniques developed can also be applied to other forms of robust estimators from the classical literature, including GM, MM, L, R and S estimators [11, 13, 17, 23]. Connections with algorithmic stability [3] and influence-function-based analysis [5, 6, 11] merit further investigation. Obtaining tighter bounds on approximation quality that would enable a proof of consistency also remains an important challenge.

References

[1] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705-1749, 2005.
[2] M. Black and A. Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision, 19(1):57-91, 1996.
[3] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2, 2002.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] A. Christmann and I. Steinwart. On robustness properties of convex risk minimization methods for pattern recognition. Journal of Machine Learning Research, 5:1007-1034, 2004.
[6] A. Christmann and I. Steinwart. Consistency and robustness of kernel-based regression in convex risk minimization. Bernoulli, 13(3):799-819, 2007.
[7] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48, 2002.
[8] Y. Freund. A more robust boosting algorithm, 2009. arXiv:0905.2138.
[9] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[10] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[11] F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, 1986.
[12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2nd edition, 2009.
[13] P. Huber and E. Ronchetti. Robust Statistics. Wiley, 2nd edition, 2009.
[14] J. Kivinen and M. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45:301-329, 2001.
[15] N. Krause and Y. Singer. Leveraging the margin more carefully. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
[16] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning, 78:287-304, 2010.
[17] R. Maronna, R. D. Martin, and V. Yohai. Robust Statistics: Theory and Methods. Wiley, 2006.
[18] H. Masnadi-Shirazi and N. Vasconcelos. On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. In Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1049-1056, 2008.
[19] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 2000.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer, 1994.
[21] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62(2):321-357, 1993.
[22] R. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[23] P. Rousseeuw and A. Leroy. Robust Regression and Outlier Detection. Wiley, 1987.
[24] B. Schoelkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[25] X. Shen, G. Tseng, X. Zhang, and W.-H. Wong. On psi-learning. Journal of the American Statistical Association, 98(463):724-734, 2003.
[26] C. Stewart. Robust parameter estimation in computer vision. SIAM Review, 41(3), 1999.
[27] Y. Wu and Y. Liu. Robust truncated hinge loss support vector machines. Journal of the American Statistical Association, 102(479):974-983, 2007.
[28] H. Xu, C. Caramanis, and S. Mannor. Robust regression and Lasso. In Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1801-1808, 2008.
[29] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485-1510, 2009.
[30] L. Xu, K. Crammer, and D. Schuurmans. Robust support vector machine training via convex outlier ablation. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2006.
[31] J. Yu, S. Vishwanathan, S. Günter, and N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization problems in machine learning. Journal of Machine Learning Research, 11:1145-1200, 2010.
A Rational Decision-Making Framework for Inhibitory Control

Rajesh P. N. Rao
Department of Computer Science, University of Washington
[email protected]

Pradeep Shenoy
Department of Cognitive Science, University of California, San Diego
[email protected]

Angela J. Yu
Department of Cognitive Science, University of California, San Diego
[email protected]

Abstract

Intelligent agents are often faced with the need to choose actions with uncertain consequences, and to modify those actions according to ongoing sensory processing and changing task demands. The requisite ability to dynamically modify or cancel planned actions is known as inhibitory control in psychology. We formalize inhibitory control as a rational decision-making problem, and apply it to the classical stop-signal task. Using Bayesian inference and stochastic control tools, we show that the optimal policy systematically depends on various parameters of the problem, such as the relative costs of different action choices, the noise level of sensory inputs, and the dynamics of changing environmental demands. Our normative model accounts for a range of behavioral data in humans and animals in the stop-signal task, suggesting that the brain implements statistically optimal, dynamically adaptive, and reward-sensitive decision-making in the context of inhibitory control problems.

1 Introduction

In natural behavior as well as in engineering applications, there is often the need to choose, under time pressure, an action among multiple options with imprecisely known consequences. For example, consider the decision of buying a house. A wise buyer should collect sufficient data to make an informed decision, but waiting too long might mean missing out on a dream home. Thus, balanced against the informational gain afforded by lengthier deliberation is the opportunity cost of inaction. Further complicating matters is the possible occurrence of a rare and unpredictably timed adverse event, such as job loss or serious illness, that would require a dynamic reformulation of one's plan of action. This ability to dynamically modify or cancel a planned action that is no longer advantageous or appropriate is known as inhibitory control in psychology.

In psychology and neuroscience, inhibitory control has been studied extensively using the stop-signal (or countermanding) task [17]. In this task, subjects perform a simple two-alternative forced choice (2AFC) discrimination task on a go stimulus, whereby one of two responses is required depending on the stimulus. On a small fraction of trials, an additional stop signal appears after some delay, which instructs the subject to withhold the discrimination or go response. As might be expected, the later the stop signal appears, the harder it is for subjects to stop the response [9] (see Figure 3). The classical model of the stop-signal task is the race model [11], which posits a race to threshold between independent go and stop processes. It also hypothesizes a stopping latency, the stop-signal reaction time (SSRT), which is the delay between stop signal onset and successful withholding of a go response. The (unobservable) SSRT is estimated as shown in Figure 1A, and is thought to be longer in patient populations associated with inhibitory deficit than in healthy controls (attention-deficit hyperactivity disorder [1], obsessive-compulsive disorder [12], and substance dependence [13]). Some evidence suggests a neural correlate of the SSRT [8, 14, 5].
Although the race model is elegant in its simplicity and captures key experimental data, it is descriptive in nature and does not address how the stopping latency and other elements of the model depend on various underlying cognitive factors. Consequently, it cannot explain why behavior and stopping latency vary systematically across different experimental conditions or across different subject populations.

We present a normative, optimal decision-making framework for inhibitory control. We formalize interactions among various cognitive components: the continual monitoring of noisy sensory information, the integration of sensory inputs with top-down expectations, and the assessment of the relative values of potential actions. Our model has two principal components: (1) a monitoring process, based on Bayesian statistical inference, that infers the go stimulus identity within each trial, as well as task parameters across trials; (2) a decision process, formalized in terms of stochastic control, that translates the current belief state based on sensory inputs into a moment-by-moment valuation of whether to choose one of the two go responses, or to wait longer. Given a certain belief state, the relative values of the various actions depend both on experimental parameters, such as the fraction of stop trials and the difficulty of go stimulus discrimination, and on subject-specific parameters, such as learning rate and subjective valuation of rewards and costs. Within our normative model of inhibitory control, stopping latency is an emergent property, arising from interactions between the monitoring and decision processes. We show that our model captures classical behavioral data in the task, makes quantitative behavioral predictions under different experimental manipulations, and suggests that the brain may be implementing near-optimal decision-making in the stop-signal task.

2 Sensory processing as Bayes-optimal statistical inference

We model sensory processing in the stop-signal task as Bayesian statistical inference. In the generative model (see Figure 1B for the graphical model), there are two independent hidden variables, corresponding to the identity of the go stimulus, d ∈ {0, 1}, and whether or not the current trial is a stop trial, s ∈ {0, 1}. Priors over d and s reflect experimental parameters, e.g. P(d = 1) = 0.5 and P(s = 1) = 0.25 in typical stop-signal experiments. Conditioned on d, a stream of iid inputs is generated on each trial, x^1, ..., x^t, ..., where t indexes small increments of time within a trial: p(x^t | d = 0) = f_0(x^t) and p(x^t | d = 1) = f_1(x^t). For simplicity, we assume f_0 and f_1 to be Bernoulli distributions with distinct rate parameters q_d and 1 - q_d, respectively. The dynamic variable z^t denotes the presence/absence of the stop signal: if the stop signal appears at time τ, then z^1 = ... = z^{τ-1} = 0 and z^τ = z^{τ+1} = ... = 1. On a go trial, s = 0, the stop signal never appears, P(τ = ∞) = 1. On a stop trial, s = 1, we assume for simplicity that the onset of the stop signal follows a constant hazard rate, i.e. τ is generated from an exponential distribution: p(τ | s = 1) = λe^{-λτ}. Conditioned on z^t, there is a separate iid stream of observations associated with the stop signal: p(y^t | z^t = 0) = g_0(y^t) and p(y^t | z^t = 1) = g_1(y^t). Again, we assume for simplicity that g_0 and g_1 are Bernoulli distributions with distinct rate parameters q_s and 1 - q_s, respectively.
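To make the generative model concrete, the following Python sketch samples a single trial. The function is ours, not the authors'; it uses the Bernoulli rates defined above with the parameter values quoted later in Figure 2 (q_d = 0.68, q_s = 0.72, λ = 0.1, r = 0.25), and it discretizes the constant-hazard onset as a geometric distribution over time steps:

    import numpy as np

    def sample_trial(r=0.25, lam=0.1, q_d=0.68, q_s=0.72, T=50, rng=None):
        """Sample one trial: d ~ Bernoulli(0.5), s ~ Bernoulli(r), stop-signal
        onset tau with constant hazard on stop trials, then the two binary
        observation streams x^1..x^T and y^1..y^T."""
        rng = np.random.default_rng(rng)
        d = int(rng.random() < 0.5)
        s = int(rng.random() < r)
        # Discretized exponential onset: constant hazard 1 - exp(-lam) per step
        tau = rng.geometric(1.0 - np.exp(-lam)) if s else np.inf
        px = (1.0 - q_d) if d else q_d        # f1 has rate 1-q_d, f0 has rate q_d
        x = (rng.random(T) < px).astype(int)
        z = np.arange(1, T + 1) >= tau        # stop signal on/off at each step
        py = np.where(z, 1.0 - q_s, q_s)      # g1 has rate 1-q_s, g0 has rate q_s
        y = (rng.random(T) < py).astype(int)
        return d, s, tau, x, y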
In the recognition model, the posterior probability associated with signal identity ptd , P (d = 1|xt ), where xt , {x1 , . . . , xt } denotes all the data observed so far, can be computed via Bayes? Rule: ptd = t pt?1 p0d ?ti=1 f1 (xi ) d f1 (x ) = t?1 t t p0d ?ti=1 f1 (xi ) + (1 ? p0d )?ti=1 f0 (xi ) pt?1 d f1 (x ) + (1 ? pd )f0 (x ) Inference about the stop signal is slightly more complicated due to the dynamics in z t . First, we define ptz as the posterior probability that the stop signal has already appeared ptz , P {? ? t|yt }, where yt , {y 1 , . . . , y t }. It can also be computed iteratively: g1 (y t )(pt?1 + (1 ? pzt?1 )h(t)) z t?1 + (1 ? pz )h(t)) + g0 (y t )(1 ? pt?1 z )(1 ? h(t)) where h(t) is the posterior probability that the stop-signal will appear in the next instant given it has not appeared already, h(t) , P (? = t|? > t?1, yt?1 ). ptz = g1 (y t )(pt?1 z h(t) = r ? P (? = t|s = 1) r?e??t = ??(t?1) r ? P (? > t ? 1|s = 1) + (1 ? r) re + (1 ? r) 2 Figure 1: Modeling inhibitory control in the stop-signal task. (A) shows the race model for behavior in the stop-signal task [11]. Go and stop stimuli, separated by a stop signal delay (SSD), initiate two independent processes that race to thresholds and determine trial outcome. On go trials, noise in the go process results in a broad distribution over threshold-crossing times, i.e., the go reaction time (RT) distribution. The stop process is typically modeled as deterministic, with an associated stop signal reaction time or SSRT. The SSRT determines the fraction of go responses successfully stopped: the go RT cumulative density function evaluated at SSD+SSRT should give the stopping error rate at that SSD. Based on these assumptions, the SSRT is estimated from data given the go RT distribution, and error rate as a function of SSD. (B) Graphical model for sensory input generation in our Bayesian model. Two separate streams of observations, {x1 , . . . , xt , . . .} and {y 1 , . . . , y t , . . .}, are associated with the go and stop stimuli, respectively. xt depend on the identity of the target, d ? {0, 1}. y t depends on whether the current trial is a stop trial, s = {0, 1}, and whether the stop-signal has already appeared by time t, z t ? {0, 1}. where r = P (s = 1) is the prior probability of a stop trial. Note that h(t) does not depend on the observations, since given that the stop signal has not yet appeared, whether it will appear in the next instant does not depend on previous observations. In the stop-signal task, a stop trial is considered a stop trial even if the subject makes the go response early, before the stop signal is presented. Following this convention, we need to compute the posterior probability that the current trial is a stop trial, denoted pts , which depends both on the current belief about the presence of the stop signal, and the expectation that it will appear in the future: pts , P (s = 1|yt ) = ptz ? 1 + (1 ? ptz ) ? P (s = 1|? > t, yt ) where P (s = 1|? > t, yt ) = P (s = 1|? > t) again does not depend on past observations: P (s = 1|? > t) = P (? > t|s = 1)P (s = 1) e??t ? r = ??t P (? > t|s = 1)P (s = 1) + P (? > t|s = 0)P (s = 0) e ? r + 1 ? (1 ? r) Finally, we define the belief state at time t to be the vector bt = (ptd , pts ). 
Figure 2A shows the evolution of belief states for various trial types: (1) go trials, where no stop signal appears; (2) stop error (SE) trials, where a stop signal is presented but a response is made; and (3) stop success (SS) trials, where the stop signal is successfully processed to cancel the response. For simplicity, only trials where d = 1 are shown, and the stop-signal onset τ on stop trials is 17 steps. Due to stochasticity in the sensory information, the go stimulus is processed slower and the stop signal is detected faster than average on some trials; these lead to successful stopping, with SE trials showing the opposite trend. On all trials, p_s shows an initial increase due to anticipation of the stop signal. Parameters used for the simulation were chosen to approximate typical experimental conditions (see e.g., Figure 3), and were kept constant throughout except where explicitly noted. The results do not change qualitatively when these settings are varied (data not shown).

3 Decision making as optimal stochastic control

In order to understand behavior as optimal decision-making, we need to specify a loss function that captures the reward structure of the task. We assume there is a deadline D for responding on go trials, and an opportunity cost of c per unit time on each trial. In addition, there is a penalty c_s for choosing to respond on a stop-signal trial, and a unit cost for making an error on a go trial (by choosing the wrong discrimination response or exceeding the deadline for responding).
X t+1 t+1 t t+1 t+1 t hV (b )|b ibt = p(x , y |b )V t+1 (bt+1 (bt , xt+1 , y t+1 )) xt+1 ,y t+1 p(x t+1 ,y t+1 t |b ) p(xt+1 |ptd ) t+1 = p(xt+1 |ptd )p(y t+1 |pts ) = ptd f1 (xt+1 ) + (1 ? ptd )f0 (xt+1 ) |pts ) p(y = (ptz + (1 ? ptz )h(t + 1))g1 (y t+1 ) + (1 ? ptz )(1 ? h(t + 1))g0 (y t+1 ) The initial condition of the value function can be computed exactly at the deadline since there is only one outcome (subject is no longer allowed to go or stop): V D (bD ) = cD + (1 ? pD s ). We can then compute {V t }D t=1 and the corresponding optimal decision policy backwards in time from t = D?1 to t = 1. In our simulations, we do so numerically by discretizing the probability space for pts into 1000 bins; ptd is represented exactly using its sufficient statistics. Note that dynamic programming is merely a convenient tool for computing the exact optimal policy. Our results show that humans and animals behave in a manner consistent with the optimal policy, indicating that the brain must use computations that are similar in nature. The important question of how such a policy may be computed or approximated neurally will be explored in future work. Figure 2B demonstrates graphically how the Q-factors Qg , Qw evolve over time for the trial types indicated in Figure 2A. Reflecting the sensory processing differences, SS trials show a slower drop in the cost of going, and a faster increase after the stop signal is processed; this is the converse of stop error trials. Note that although the average trajectory Qg does not dip below Qw in the non-canceled (error) stop trials, there is substantial variability in the individual trajectories under a Bernoulli observation model, and each one of them dips below Qw at some point. The histograms show reaction time distributions for go and SE trials. 4 4.1 Results Model captures classical behavioral data in the stop-signal task We first show that our model captures the basic behavioral results characteristic of the stop-signal task. Figure 3 compares our model predictions to data from Macaque monkeys performing a version 4 Figure 2: Mean trajectories of posteriors and Q-factors. (A) Evolution of the average belief states pd and ps corresponding to go and stop signals, for various trials?GO: go trials, SS: stop trials with successfully canceled response, SE: stop error trials. Stochasticity results in faster or slower processing of the two sensory input streams; these lead to stop success or error. For simplicity, d = 1 for all trials in the figure. The stop signal is presented at ?s = 17 time steps (dashed vertical line); the initial rise in ps corresponds to anticipation of a potential stop signal. (B) Go and Wait costs for the same partitioning of trials, along with the reaction time distributions for go and SE trials. On SE trials, the cost of going drops faster, and crosses below the cost of waiting before the stop signal can be adequately processed. Although the average go cost does not drop below the average wait cost, each individual trajectory crosses over at various time points, as indicated by the RT histograms. Simulation parameters: qd = 0.68, qs = 0.72, ? = 0.1, r = 0.25, D = 50 steps, cs = 50 ? c, where c = 0.005 per time step. c is approximately the rate at which monkeys earn rewards in the task, which is equivalent to assuming that the cost of time (opportunity cost) should be set by the reward rate. Unless otherwise stated, these parameters are used in all the subsequent simulations. Thickness of lines indicates standard errors of the mean. 
Figure 3: Optimal decision-making model captures classical behavioral effects in the stop-signal task. (A) Inhibition function: errors on stop trials increase as a function of SSD. (B) Effect reproduced by our model. (C) Discrimination RT is faster on non-canceled stop trials than go trials. (D) Effect reproduced by our model. (A,C) Data of two monkeys performing the stopping task (from [9]). of the stop-signal task [9]. One of the basic measures of performance is the inhibition function, which is the average error rate on stop trials as a function of SSD. Error increases as SSD increases, as shown in the monkeys? behavior and also in our model (Figure 3A;B). Another classical result in the stop-signal task is that RT?s on non-canceled (error) stop trials are on average faster than those on go trials (Figure 3C). Our model also reproduces this result (Figure 3D). Intuitively, this is because inference about the go stimulus identity can proceed slowly or rapidly on different trials, due to noise in the observation process. Non-canceled trials are those in which pd happens to evolve rapidly enough for a go response to be initiated before the stop signal is adequately processed. Go trial RT?s, on the other hand, include all trajectories, whether pd happens to evolve quickly or not (see Figure 2). 5 4.2 Effect of stop trial frequency on behavior The overall frequency of stop signal trials has systematic effects on stopping behavior [6]. As the fraction of stop trials is increased, go responses slow down and stop errors decrease in a graded fashion (Figure 4A;B). In our model (Figure 4C;D), the stop signal frequency, r, influences the speed with which a stop signal is detected, whereby larger r leads to greater posterior belief that a stop signal is present, and also greater confidence that a stop signal will appear soon even it has not already. It therefore controls the tradeoff between going and stopping in the optimal policy. If stop signals are more prevalent, the optimal decision policy can use that information to make fewer errors on stop trials, by delaying the go response, and by detecting the stop signal faster. Even in experiments where the fraction of stop trials is held constant, chance runs of stop or go trials may result in fluctuating local frequency of stop trials, which in turn may lead to trial-by-trial behavioral adjustments due to subjects? fluctuating estimate of r. Indeed, subjects speed up after a chance run of go trials, and slow down following a sequence of stop trials [6] (see Figure 4E). We model these effects by assuming that subjects believe that the stop signal frequency rk on trial k has probability ? of being the same as rk?1 and probability 1 ? ? of being re-sampled from a prior distribution p0 (r), chosen in our simulations to be a beta distribution with a bias toward small r (infrequent stop trials). Previous work has shown that this is essentially equivalent to using a causal, exponential window to estimate the current rate of stop trials [20], where the exponential decay constant is monotonically related to the assumed volatility in the environment in the Bayesian model. The probability of trial k being a stop trial, P (sk = 1|sk?1 ), where sk , {s1 , . . . , sk }, is Z Z P (sk = 1|sk?1 ) = P (sk = 1|rk )p(rk |sk?1 )drk = rk p(rk |sk?1 )drk = hrk |sk?1 i . In other words, the predictive probability of seeing a stop trial is just the mean of the predictive distribution p(rk |sk?1 ). We denote this mean as r?k . 
The predictive distribution is a mixture of the previous posterior distribution and a fixed prior distribution, with α and 1 - α acting as the mixing coefficients, respectively:

    p(r_k \mid \mathbf{s}^{k-1}) = \alpha\, p(r_{k-1} \mid \mathbf{s}^{k-1}) + (1 - \alpha)\, p_0(r_k),

and the posterior distribution is updated according to Bayes' rule:

    p(r_k \mid \mathbf{s}^{k}) \propto P(s_k \mid r_k)\, p(r_k \mid \mathbf{s}^{k-1}).

As shown in Figure 4F, our model successfully explains the observed sequential effects in behavioral data. Since the majority of trials (75%) are go trials, a chance run of go trials affects RT much less than a chance run of stop trials. The figure also shows results for different values of α, with all other parameters kept constant. These values encode different expectations about volatility in the stop-trial frequency, and produce slightly different predictions about sequential effects. Thus, α may be an important source of the individual variability observed in the data, along with the other model parameters. Recent data show that neural activity in the supplementary eye field is predictive of trial-by-trial slowing as a function of the recent stop-trial frequency [15]. Moreover, microstimulation of supplementary eye field neurons results in slower responses to the go stimulus and fewer stop errors [16]. Together, this suggests that the supplementary eye field may encode the local frequency of stop trials and influence stopping behavior in a statistically appropriate manner.
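A minimal Python sketch of this trial-by-trial frequency tracker follows (our code); the Beta(1, 4) prior is a hypothetical stand-in for the "beta distribution with a bias toward small r" mentioned above:

    import numpy as np

    # Discretized "forgetful" Bayesian update for the stop-trial frequency r_k:
    # with probability alpha, r_k = r_{k-1}; otherwise r_k is redrawn from the
    # prior p0(r), here Beta(1, 4), biased toward infrequent stop trials.
    grid = np.linspace(0.001, 0.999, 500)
    a, b = 1.0, 4.0
    p0 = grid**(a - 1) * (1 - grid)**(b - 1)
    p0 /= p0.sum()

    def update(post, s_k, alpha=0.95):
        pred = alpha * post + (1 - alpha) * p0        # predictive p(r_k | s^{k-1})
        like = grid if s_k else (1 - grid)            # P(s_k | r_k)
        post = like * pred
        return post / post.sum(), (grid * pred).sum() # posterior, and r_hat_k

    post = p0.copy()
    rng = np.random.default_rng(0)
    for k in range(200):                              # simulate a session, r = 0.25
        s_k = rng.random() < 0.25
        post, r_hat = update(post, s_k)

Larger values of alpha correspond to a belief in a more stable environment and hence a longer effective memory for the local stop-trial rate.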
This is a direct consequence of the optimal model attempting to minimize the total expected cost ? with stop errors being more expensive, there is an incentive to slow down the go response in order to minimize the possibility of missing a stop signal. Critically, the SSRT in the human data decreases with increasing bias toward stopping (Figure 5C). Although the SSRT is not an explicit component of our model, we can nevertheless estimate it from the reaction times and fraction of stop errors produced by our model simulations, following the race model?s prescribed procedure [11]. Essentially, the SSRT is estimated as the difference between mean go RT and the SSD at which 50% stop errors are committed (see Figure 1). By reconciling the competing demands of stopping and going in an optimal manner, the estimated SSRT from our simulations is automatically adjusted to mimic the observed human behavior (Figure 5F). This suggests that the SSRT emerges naturally out of rational decision-making in the task. 5 Discussion We presented an optimal decision-making model for inhibitory control in the stop-signal task. The parameters of the model are either set directly by experimental design (cost function, stop frequency and timing), or correspond to subject-specific abilities that can be estimated from behavior (sensory processing); thus, there are no ?free? parameters. The model successfully captures classical behavioral results, such as the increase in error rate on stop trials with the increase of SSD, as well as the decreases in average response time from go trials to error stop trials. The model also captures more subtle changes in stopping behavior, when the fraction of stop-signal trials, the penalties for various types of errors, and the history of experienced trials are manipulated. The classical model for the task 7 Figure 5: Effect of reward on stopping. (A-C) Data from human subjects performing a variant of the stop-signal task where the ratio of rewards for quick go responses and successful stopping was varied, inducing a bias towards going or stopping (Data from [10]). An increased bias towards stopping (i.e., fewer stop errors, (A)) is associated with an increase in the average reaction time on go trials (B), and a decrease in the stopping latency or SSRT (C). (D-F) Our model captures this change in SSRT as a function of the inherent tradeoff between RT and stop errors. Values of cs : low=0.15, med=0.25, high=0.5. (the race model) does not directly explain or quantitatively predict these changes in behavior. Moreover, the stopping latency measure prescribed by the race model (the SSRT) changes systematically across various experimental manipulations, indicating that it cannot be used as a simplistic, global measure of inhibitory control for each subject. Instead, inhibitory control is a multifaceted function of factors such as subject-specific sensory processing rates, attentional factors, and internal/external bias towards stopping or going, which are explicitly related to parameters in our normative model. The close correspondence of model predictions with human and animal behavior suggests that the computations necessary for optimal behavior are exactly or approximately implemented by the brain. We used dynamic programming as a convenient tool to compute the optimal monitoring and decisional computations, but the brain is unlikely to use this computationally expensive method. 
5 Discussion

We presented an optimal decision-making model for inhibitory control in the stop-signal task. The parameters of the model are either set directly by experimental design (cost function, stop frequency and timing), or correspond to subject-specific abilities that can be estimated from behavior (sensory processing); thus, there are no "free" parameters. The model successfully captures classical behavioral results, such as the increase in error rate on stop trials with increasing SSD, as well as the decrease in average response time from go trials to error stop trials. The model also captures more subtle changes in stopping behavior when the fraction of stop-signal trials, the penalties for various types of errors, and the history of experienced trials are manipulated. The classical model for the task (the race model) does not directly explain or quantitatively predict these changes in behavior. Moreover, the stopping latency measure prescribed by the race model (the SSRT) changes systematically across various experimental manipulations, indicating that it cannot be used as a simplistic, global measure of inhibitory control for each subject. Instead, inhibitory control is a multifaceted function of factors such as subject-specific sensory processing rates, attentional factors, and internal/external bias towards stopping or going, which are explicitly related to parameters in our normative model.

The close correspondence of model predictions with human and animal behavior suggests that the computations necessary for optimal behavior are exactly or approximately implemented by the brain. We used dynamic programming as a convenient tool to compute the optimal monitoring and decisional computations, but the brain is unlikely to use this computationally expensive method. Recent studies of the frontal eye fields (FEF, [8]) and superior colliculus [14] of monkeys show neural responses that diverge on go and correct stop trials, indicating that they may encode computations leading to the execution or cancellation of movement. It is possible that optimal behavior can be approximated by a diffusion process implementing the race model [4, 19], with the rate and threshold parameters adjusted according to task demands. In future work, we will study more explicitly how optimal decision-making can be approximated by a diffusion-model implementation of the race model (see e.g., [18]), and how the parameters of such an implementation may be set to reflect task demands. We will also assess alternatives to the race model, in the form of other approximate algorithms, in terms of their ability to capture behavioral data and explain neural data.

One major aim of our work is to understand how stopping ability and the SSRT arise from various cognitive factors, such as sensitivity to rewards, learning capacity related to estimating stop-signal frequency, and the rate at which sensory inputs are processed. This composite view of stopping ability and the SSRT may help explain group differences in stopping behavior, in particular differences in SSRT observed in a number of psychiatric and neurological conditions, such as substance abuse [13], attention-deficit hyperactivity disorder [1], schizophrenia [3], obsessive-compulsive disorder [12], Parkinson's disease [7], Alzheimer's disease [2], et cetera. One of our goals for future research is to map group differences in stopping behavior to the parameters of our model, thus gaining insight into exactly which cognitive components go awry in each dysfunctional state.

References

[1] R.M. Alderson, M.D. Rapport, and M.J. Kofler. Attention-deficit/hyperactivity disorder and behavioral inhibition: a meta-analytic review of the stop-signal paradigm. Journal of Abnormal Child Psychology, 35(5):745-758, 2007.
[2] H. Amieva, S. Lafont, S. Auriacombe, N. Le Carret, J.F. Dartigues, J.M. Orgogozo, and C. Fabrigoule. Inhibitory breakdown and dementia of the Alzheimer type: A general phenomenon? Journal of Clinical and Experimental Neuropsychology, 24(4):503-516, 2002.
[3] J.C. Badcock, P.T. Michie, L. Johnson, and J. Combrinck. Acts of control in schizophrenia: Dissociating the components of inhibition. Psychological Medicine, 32(2):287-297, 2002.
[4] L. Boucher, T.J. Palmeri, G.D. Logan, and J.D. Schall. Inhibitory control in mind and brain: an interactive race model of countermanding saccades. Psychological Review, 114(2):376-397, 2007.
[5] C.D. Chambers, H. Garavan, and M.A. Bellgrove. Insights into the neural basis of response inhibition from cognitive and clinical neuroscience. Neuroscience and Biobehavioral Reviews, 33(5):631-646, 2009.
[6] E.E. Emeric, J.W. Brown, L. Boucher, R.H.S. Carpenter, D.P. Hanes, R. Harris, G.D. Logan, R.N. Mashru, M. Paré, P. Pouget, V. Stuphorn, T.L. Taylor, and J. Schall. Influence of history on saccade countermanding performance in humans and macaque monkeys. Vision Research, 47(1):35-49, 2007.
[7] S. Gauggel, M. Rieger, and T. Feghoff. Inhibition of ongoing responses in patients with Parkinson's disease. J. Neurol. Neurosurg. Psychiatry, 75(4):539-544, 2004.
[8] D.P. Hanes, W.F. Patterson, and J.D. Schall. The role of frontal eye field in countermanding saccades: Visual, movement and fixation activity. Journal of Neurophysiology, 79:817-834, 1998.
[9] D.P. Hanes and J.D. Schall. Countermanding saccades in macaque. Visual Neuroscience, 12(5):929, 1995.
[10] L.A. Leotti and T.D. Wager. Motivational influences on response inhibition measures. Journal of Experimental Psychology: Human Perception and Performance, 2009.
[11] G.D. Logan and W.B. Cowan. On the ability to inhibit thought and action: A theory of an act of control. Psychological Review, 91(3):295-327, 1984.
[12] L. Menzies, S. Achard, S.R. Chamberlain, N. Fineberg, C.H. Chen, N. del Campo, B.J. Sahakian, T.W. Robbins, and E. Bullmore. Neurocognitive endophenotypes of obsessive-compulsive disorder. Brain, 130(12):3223, 2007.
[13] J.T. Nigg, M.M. Wong, M.M. Martel, J.M. Jester, L.I. Puttler, J.M. Glass, K.M. Adams, H.E. Fitzgerald, and R.A. Zucker. Poor response inhibition as a predictor of problem drinking and illicit drug use in adolescents at risk for alcoholism and other substance use disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 45(4):468, 2006.
[14] M. Paré and D.P. Hanes. Controlled movement processing: superior colliculus activity associated with countermanded saccades. Journal of Neuroscience, 23(16):6480-6489, 2003.
[15] V. Stuphorn, J.W. Brown, and J.D. Schall. Role of supplementary eye field in saccade initiation: Executive, not direct, control. Journal of Neurophysiology, 103(2):801, 2010.
[16] V. Stuphorn and J.D. Schall. Executive control of countermanding saccades by the supplementary eye field. Nature Neuroscience, 9(7):925-931, 2006.
[17] F. Verbruggen and G.D. Logan. Models of response inhibition in the stop-signal and stop-change paradigms. Neuroscience & Biobehavioral Reviews, 33(5):647-661, 2009.
[18] F. Verbruggen and G.D. Logan. Proactive adjustments of response strategies in the stop-signal paradigm. Journal of Experimental Psychology: Human Perception and Performance, 35(3):835-854, 2009.
[19] K.F. Wong-Lin, P. Eckhoff, P. Holmes, and J.D. Cohen. Optimal performance in a countermanding saccade task. Brain Research, 2009.
[20] A.J. Yu and J.D. Cohen. Sequential effects: Superstition or rational behavior? Advances in Neural Information Processing Systems, 21:1873-1880, 2009.
3,243
3,938
Improvements to the Sequence Memoizer

Yee Whye Teh
Gatsby Computational Neuroscience Unit, University College London, London, WC1N 3AR, UK
[email protected]

Jan Gasthaus
Gatsby Computational Neuroscience Unit, University College London, London, WC1N 3AR, UK
[email protected]

Abstract

The sequence memoizer is a model for sequence data with state-of-the-art performance on language modeling and compression. We propose a number of improvements to the model and inference algorithm, including an enlarged range of hyperparameters, a memory-efficient representation, and inference algorithms operating on the new representation. Our derivations are based on precise definitions of the various processes that will also allow us to provide an elementary proof of the "mysterious" coagulation and fragmentation properties used in the original paper on the sequence memoizer by Wood et al. (2009). We present some experimental results supporting our improvements.

1 Introduction

The sequence memoizer (SM) is a Bayesian nonparametric model for discrete sequence data producing state-of-the-art results for language modeling and compression [1, 2]. It models each symbol of a sequence using a predictive distribution that is conditioned on all previous symbols, and thus can be understood as a non-Markov sequence model. Given the very large (infinite) number of predictive distributions needed to model arbitrary sequences, it is essential that statistical strength be shared in their estimation. To do so, the SM uses a hierarchical Pitman-Yor process prior over the predictive distributions [3]. One innovation of the SM over [3] is its use of coagulation and fragmentation properties [4, 5] that allow for efficient representation of the model using a data structure whose size is linear in the sequence length. However, in order to make use of these properties, all concentration parameters, which were allowed to vary freely in [3], were fixed to zero.

In this paper we explore a number of further innovations to the SM. Firstly, we propose a more flexible setting of the hyperparameters with potentially non-zero concentration parameters that still allow the use of the coagulation/fragmentation properties. In addition to better predictive performance, the setting also partially mitigates a problem observed in [1], whereby on encountering a long sequence of the same symbol, the model becomes overly confident that it will continue with the same symbol.

The second innovation addresses memory usage issues in inference algorithms for the SM. In particular, current algorithms use a Chinese restaurant franchise representation for the HPYP, where the seating arrangement of customers in each restaurant is represented by a list, each entry being the number of customers sitting around one table [3]. This is already an improvement over the naïve Chinese restaurant franchise in [6] which stores pointers from customers to the tables they sit at, but can still lead to huge memory requirements when restaurants contain many tables. One approach to mitigate this problem has been explored in [7], which uses a representation that stores a histogram of table sizes instead of the table sizes themselves. Our proposal is to store even less, namely only the minimal statistics about each restaurant required to make predictions: the number of customers and the number of tables occupied by the customers. Inference algorithms will have to be adapted to this compact representation, and we describe and compare a number of these.
In Section 2 we will give precise definitions of Pitman-Yor processes and Chinese restaurant processes. These will be used to define the SM model in Section 3, and to derive the results about the extended hyperparameter setting in Section 4 and the memory-efficient representation in Section 5. As a side benefit we will also be able to give an elementary proof of the coagulation and fragmentation properties in Section 4, which was presented as a fait accompli in [1], while the general and rigorous treatment in the original papers [4, 5] is somewhat inaccessible to a wider audience.

2 Pitman-Yor Processes and Chinese Restaurant Processes

A Pitman-Yor process (PYP) is a particular distribution over distributions over some probability space Σ [8, 9]. We denote by PY(α, d, G_0) a PYP with concentration parameter α > −d, discount parameter d ∈ [0, 1), and base distribution G_0 over Σ. We can describe a Pitman-Yor process using its associated Chinese restaurant process (CRP). A Chinese restaurant has customers sitting around tables which serve dishes. If there are c customers we index them with [c] = {1, ..., c}. We define a seating arrangement of the customers as a set of disjoint non-empty subsets partitioning [c]. Each subset is a table and consists of the customers sitting around it, e.g. {{1, 3}, {2}} means customers 1 and 3 sit at one table and customer 2 sits at another by itself. Let A_c be the set of seating arrangements of c customers, and A_{ct} those with exactly t tables. The CRP describes a distribution over seating arrangements as follows: customer 1 sits at a table; for customer c + 1, if A ∈ A_c is the current seating arrangement, then she joins a table a ∈ A with probability (|a| − d)/(α + c) and starts a new table with probability (α + |A|d)/(α + c). We denote the resulting distribution over A_c as CRP_c(α, d). Multiplying the conditional probabilities together,

    P(A) = ( [α + d]_d^{|A|−1} / [α + 1]_1^{c−1} ) ∏_{a∈A} [1 − d]_1^{|a|−1}    for each A ∈ A_c,    (1)

where [y]_d^n = ∏_{i=0}^{n−1} (y + id) is Kramp's symbol. Note that the denominator is the normalization constant. Fixing the number of tables to be t ≤ c, the distribution, denoted as CRP_{ct}(d), becomes:

    P(A) = ∏_{a∈A} [1 − d]_1^{|a|−1} / S_d(c, t)    for each A ∈ A_{ct},    (2)

where the normalization constant S_d(c, t) = Σ_{A∈A_{ct}} ∏_{a∈A} [1 − d]_1^{|a|−1} is a generalized Stirling number of type (−1, −d, 0) [10]. These can be computed recursively [3] (see also Section 5). Note that conditioning on a fixed t the seating arrangement will not depend on α, only on d.

Suppose G ~ PY(α, d, G_0) and z_1, ..., z_c | G ~ G i.i.d. The CRP describes the PYP in terms of its effect on z_{1:c} = z_1, ..., z_c. In particular, marginalizing out G, the distribution of z_{1:c} can be described as follows: draw A ~ CRP_c(α, d), on each table serve a dish which is an iid draw from G_0, finally let variable z_i take on the value of the dish served at the table that customer i sat at. Now suppose we wish to perform inference given observation of z_{1:c}. This is equivalent to conditioning on the dishes that each customer is served. Since customers at the same table are served the same dish, the different values among the z_i's split the restaurant into multiple sections, with customers and tables in each section being served a distinct dish. There can be more than one table in each section since multiple tables can serve the same dish (if G_0 has atoms). If s ∈ Σ is a dish, let c_s be the number of z_i's with value s (number of customers served dish s), t_s the number of tables, and A_s ∈
A_{c_s t_s} the seating arrangement of customers around the tables serving dish s (we reindex the c_s customers to be [c_s]). The joint distribution over seating arrangements and observations is then:¹

    P({c_s, t_s, A_s}, z_{1:c}) = ( [α + d]_d^{t_·−1} / [α + 1]_1^{c_·−1} ) ( ∏_{s∈Σ} G_0(s)^{t_s} ) ( ∏_{s∈Σ} ∏_{a∈A_s} [1 − d]_1^{|a|−1} ),    (3)

where t_· = Σ_{s∈Σ} t_s and similarly for c_·. We can marginalize out {A_s} from (3) using (2):

    P({c_s, t_s}, z_{1:c}) = ( [α + d]_d^{t_·−1} / [α + 1]_1^{c_·−1} ) ( ∏_{s∈Σ} G_0(s)^{t_s} ) ( ∏_{s∈Σ} S_d(c_s, t_s) ).    (4)

Inference then amounts to computing the posterior of either {t_s, A_s} or only {t_s} given z_{1:c} (the c_s are fixed) and can be achieved by Gibbs sampling or other means.

¹ We have omitted the set subscript {·}_{s∈Σ}. We will drop these subscripts when they are clear from context.

3 The Sequence Memoizer and its Chinese Restaurant Representation

In this section we review the sequence memoizer (SM) and its representation using Chinese restaurants [3, 11, 1, 2]. Let Σ be the discrete set of symbols making up the sequences to be modeled, and let Σ* be the set of finite sequences of symbols from Σ. The SM models a sequence x_{1:T} = x_1, x_2, ..., x_T ∈ Σ* using a set of conditional distributions:

    P(x_{1:T}) = ∏_{i=1}^T P(x_i | x_{1:i−1}) = ∏_{i=1}^T G_{x_{1:i−1}}(x_i),    (5)

where G_u(s) is the conditional probability of the symbol s ∈ Σ occurring after a context u ∈ Σ* (the sequence of symbols occurring before s). The parameters of the model consist of all the conditional distributions {G_u}_{u∈Σ*}, and are given a hierarchical Pitman-Yor process (HPYP) prior:

    G_ε ~ PY(α_ε, d_ε, H),    G_u | G_{σ(u)} ~ PY(α_u, d_u, G_{σ(u)})  for u ∈ Σ*\{ε},    (6)

where ε is the empty sequence, σ(u) is the sequence obtained by dropping the first symbol in u, and H is the overall base distribution over Σ (we take H to be uniform over a finite Σ). Note that we have generalized the model to allow each G_u to have its own concentration and discount parameters, whereas [1, 2] worked with α_u = 0 and d_u = d_{|u|} (i.e. context length-dependent discounts).

As in previous works, the hierarchy over {G_u} is represented using a Chinese restaurant franchise [6]. Each G_u has a corresponding restaurant indexed by u. Customers in the restaurant are draws from G_u, tables are draws from its base distribution G_{σ(u)}, and dishes are the drawn values from Σ. For each s ∈ Σ and u ∈ Σ*, let c_{us} and t_{us} be the numbers of customers and tables in restaurant u served dish s, and let A_{us} ∈ A_{c_{us} t_{us}} be their seating arrangement. Each observation of x_i in context x_{1:i−1} corresponds to a customer in restaurant x_{1:i−1} who is served dish x_i, and each table in each restaurant u, being a draw from the base distribution G_{σ(u)}, corresponds to a customer in the parent restaurant σ(u). Thus, the numbers of customers and tables have to satisfy the constraints

    c_{us} = c^x_{us} + Σ_{v: σ(v)=u} t_{vs},    (7)

where c^x_{us} = 1 if s = x_i and u = x_{1:i−1} for some i, and 0 otherwise. The goal of inference is to compute the posterior over the states {c_{us}, t_{us}, A_{us}}_{s∈Σ, u∈Σ*} of the restaurants (and possibly the concentration and discount parameters). The joint distribution can be obtained by multiplying the probabilities of all seating arrangements (3) in all restaurants:

    P({c_{us}, t_{us}, A_{us}}, x_{1:T}) = ( ∏_{s∈Σ} H(s)^{t_{εs}} ) ∏_{u∈Σ*} ( [α_u + d_u]_{d_u}^{t_{u·}−1} / [α_u + 1]_1^{c_{u·}−1} ∏_{s∈Σ} ∏_{a∈A_{us}} [1 − d_u]_1^{|a|−1} ).    (8)

The first parentheses contain the probability of draws from the overall base distribution H, and the second parentheses contain the probability of the seating arrangement in restaurant u.
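To make the restaurant metaphor concrete, the following minimal sketch (our own illustration, not code from the paper; function name and example values are ours) seats customers sequentially according to the CRP probabilities above:

import random

def sample_crp(c, alpha, d, rng=random.Random(0)):
    # Sequentially seat c customers in a CRP(alpha, d).
    # A customer joins an existing table of size n with probability
    # proportional to (n - d), and opens a new table with probability
    # proportional to (alpha + t*d), t being the current number of tables.
    tables = []                              # tables[j] = size of table j
    for _ in range(c):
        t = len(tables)
        weights = [n - d for n in tables] + [alpha + t * d]
        r = rng.random() * sum(weights)
        for j, w in enumerate(weights):
            if r < w:
                break
            r -= w
        if j == t:
            tables.append(1)                 # new table
        else:
            tables[j] += 1                   # join table j
    return tables

# Empirical mean number of tables for c = 100, alpha = 1, d = 0.5:
runs = [len(sample_crp(100, 1.0, 0.5)) for _ in range(200)]
print(sum(runs) / len(runs))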
Given a state of the restaurants drawn from the posterior, the predictive probability of symbol s in context v can then be computed recursively (with P_{σ(ε)}(s) defined to be H(s)):

    P_v(s) = (c_{vs} − t_{vs} d_v)/(α_v + c_{v·}) + ((α_v + t_{v·} d_v)/(α_v + c_{v·})) P_{σ(v)}(s).    (9)

4 Non-zero Concentration Parameters

In [1] the authors proposed setting all the concentration parameters to zero. Though limiting the flexibility of the model, this allowed them to take advantage of coagulation and fragmentation properties of PYPs [4, 5] to marginalize out all but a linear number (in T) of restaurants from the hierarchy. We propose the following enlarged family of hyperparameter settings: let α_ε = α > 0 be free to vary at the root of the hierarchy, and set each α_u = α_{σ(u)} d_u for each u ∈ Σ*\{ε}. The discounts can vary freely.

Figure 1: Illustration of the relationship between the restaurants A_1, A_2, C and F_a.

In addition to more flexible modeling, this also partially mitigates the overconfidence problem [2]. To see why, notice from (9) that the predictive probability is a weighted average of predictive probabilities given contexts of various lengths. Since α_v > 0, the model gives higher weights to the predictive probabilities of shorter contexts (compared to α_v = 0). These typically give less extreme values since they include influences not just from the sequence of identical symbols, but also from other observations of other symbols in other contexts.

Our hyperparameter settings also retain the coagulation and fragmentation properties which allow us to marginalize out many PYPs in the hierarchy for efficient inference. We will provide an elementary proof of these results in terms of CRPs in the following. First we describe the coagulation and fragmentation operations. Let c ≥ 1 and suppose A_2 ∈ A_c and A_1 ∈ A_{|A_2|} are two seating arrangements where the number of customers in A_1 is the same as that of tables in A_2. Each customer in A_1 can be put in one-to-one correspondence to a table in A_2 and sits at a table in A_1. Now consider re-representing A_1 and A_2. Let C ∈ A_c be the seating arrangement obtained by coagulating (merging) tables of A_2 corresponding to customers in A_1 sitting at the same table. Further, split A_2 into sections, one for each table a ∈ C, where each section F_a ∈ A_{|a|} contains the |a| customers and tables merged to make up a. The converse of coagulating tables of A_2 into C is of course to fragment each table a ∈ C into the smaller tables in F_a. Note that there is a one-to-one correspondence between tables in C and in A_1, and the number of customers in each table of A_1 is that of tables in the corresponding F_a. Thus A_1 and A_2 can be reconstructed from C and {F_a}_{a∈C}.

Theorem 1 ([4, 5]). Suppose A_2 ∈ A_c, A_1 ∈ A_{|A_2|}, C ∈ A_c and F_a ∈ A_{|a|} for each a ∈ C are related as above. Then the following describe equivalent distributions: (I) A_2 ~ CRP_c(αd_2, d_2) and A_1 | A_2 ~ CRP_{|A_2|}(α, d_1). (II) C ~ CRP_c(αd_2, d_1 d_2) and F_a | C ~ CRP_{|a|}(αd_1 d_2, d_2) for each a ∈ C.

Proof. We simply show that the joint distributions are the same. Starting with (I) and using (1),

    P(A_1, A_2) = ( [α + d_1]_{d_1}^{|A_1|−1} / [α + 1]_1^{|A_2|−1} ∏_{a∈A_1} [1 − d_1]_1^{|a|−1} ) ( [αd_2 + d_2]_{d_2}^{|A_2|−1} / [αd_2 + 1]_1^{c−1} ∏_{b∈A_2} [1 − d_2]_1^{|b|−1} )
                = ( [αd_2 + d_1 d_2]_{d_1 d_2}^{|A_1|−1} / [αd_2 + 1]_1^{c−1} ) ( ∏_{a∈A_1} [d_2 − d_1 d_2]_{d_2}^{|a|−1} ) ( ∏_{b∈A_2} [1 − d_2]_1^{|b|−1} ).

We used the identity [βα + β]_β^{n−1} = β^{n−1} [α + 1]_1^{n−1} for all α, β, n.
Re-grouping the products and expressing the same quantities in terms of C and {F_a},

    = ( [αd_2 + d_1 d_2]_{d_1 d_2}^{|C|−1} / [αd_2 + 1]_1^{c−1} ) ∏_{a∈C} ( [d_2 − d_1 d_2]_{d_2}^{|F_a|−1} ∏_{b∈F_a} [1 − d_2]_1^{|b|−1} ) = P(C, {F_a}_{a∈C}).

We see that conditioning on C each F_a ~ CRP_{|a|}(αd_1 d_2, d_2). Marginalizing {F_a} out using (1),

    P(C) = ( [αd_2 + d_1 d_2]_{d_1 d_2}^{|C|−1} / [αd_2 + 1]_1^{c−1} ) ∏_{a∈C} [1 − d_1 d_2]_1^{|a|−1}.

So C ~ CRP_c(αd_2, d_1 d_2) and (I)⇒(II). Reversing the same argument shows that (II)⇒(I).

Statement (I) of the theorem is exactly the Chinese restaurant franchise of the hierarchical model G_1 | G_0 ~ PY(α, d_1, G_0), G_2 | G_1 ~ PY(αd_2, d_2, G_1) with c iid draws from G_2. The theorem shows that the clustering structure of the c customers in the franchise is equivalent to the seating arrangement in a CRP with parameters αd_2, d_1 d_2, i.e. G_2 | G_0 ~ PY(αd_2, d_1 d_2, G_0) with G_1 marginalized out. Conversely, the fragmentation operation (II) regains Chinese restaurant representations for both G_2 | G_1 and G_1 | G_0 from one for G_2 | G_0. This result can be applied to marginalize out all but a linear number of PYPs from (6) [1]. The resulting model is still a HPYP of the same form as (6), except that it only need be defined over the prefixes of x_{1:T} as well as some subset of their ancestors. In the rest of this paper we will refer to (6) and its Chinese restaurant franchise representation (8) with the understanding that we are operating in this reduced hierarchy. Let U denote the reduced set of contexts, and redefine σ(u) to be the parent of u in U. The concentration and discount parameters need to be modified accordingly.

5 Compact Representation

Current inference algorithms for the SM and hierarchical Pitman-Yor processes operate in the Chinese restaurant franchise representation, and use either Gibbs sampling [3, 11, 1] or particle filtering [2]. To lower memory requirements, instead of storing the precise seating arrangement of each restaurant, the algorithms only store the numbers of customers, numbers of tables and sizes of all tables in the franchise. This is sufficient for sampling and for prediction. However, for large data sets the amount of memory required to store the sizes of the tables can still be very large. We propose algorithms that only store the numbers of customers and tables but not the table sizes. This compact representation needs to store only two integers (c_{us}, t_{us}) per context/symbol pair, as opposed to t_{us} integers.² These counts are already sufficient for prediction, as (9) does not depend on the table sizes. We will also consider a number of sampling algorithms in this representation. Our starting point is the joint distribution over the Chinese restaurant franchise (8). Integrating out the seating arrangements {A_{us}} using (2) gives the joint distribution over {c_{us}, t_{us}}:

    P({c_{us}, t_{us}}, x_{1:T}) = ( ∏_{s∈Σ} H(s)^{t_{εs}} ) ∏_{u∈U} ( [α_u + d_u]_{d_u}^{t_{u·}−1} / [α_u + 1]_1^{c_{u·}−1} ∏_{s∈Σ} S_{d_u}(c_{us}, t_{us}) ).    (10)

Note that each c_{us} is in fact determined by (7) so in fact the only unobserved variables in (10) are {t_{us}}. With this joint distribution we can now derive various sampling algorithms.

² In both representations one may also want to store the total number of customers and tables in each restaurant for efficiency. In practice, where there is additional overhead due to the data structures involved, storage space for the full representation can be reduced by treating context/symbol pairs with only one customer separately.

5.1 Sampling Algorithms

Direct Gibbs Sampling of {c_{us}, t_{us}}. It is straightforward to derive a Gibbs sampler from (10). Since each c_{us} is determined by c^x_{us} and the t_{vs} at child restaurants v, it is sufficient to update each t_{us}, which for t_{us} in the range {1, ..., c_{us}} has conditional distribution

    P(t_{us} | rest) ∝ ( [α_u + d_u]_{d_u}^{t_{u·}−1} / [α_{σ(u)} + 1]_1^{c_{σ(u)·}−1} ) S_{d_u}(c_{us}, t_{us}) S_{d_{σ(u)}}(c_{σ(u)s}, t_{σ(u)s}),    (11)

where t_{u·}, c_{σ(u)·} and c_{σ(u)s} all depend on t_{us} through the constraints (7).
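The generalized Stirling numbers entering (10) and (11) obey the standard recursion S_d(c, t) = S_d(c−1, t−1) + (c − 1 − t d) S_d(c−1, t) with S_d(1, 1) = 1 (see [3, 10]). The sketch below (our own, with hypothetical names) tabulates them in the log domain, one way to handle the large dynamic range discussed next:

import numpy as np

def log_stirling_table(c_max, d):
    # log S_d(c, t) for 1 <= t <= c <= c_max, generalized Stirling
    # numbers of type (-1, -d, 0), via the recursion
    #     S_d(c, t) = S_d(c-1, t-1) + (c - 1 - t*d) * S_d(c-1, t),
    # kept in the log domain to avoid numerical under-/overflow.
    logS = np.full((c_max + 1, c_max + 1), -np.inf)
    logS[1, 1] = 0.0
    for c in range(2, c_max + 1):
        for t in range(1, c + 1):
            new = logS[c - 1, t - 1]                # customer c sits alone
            coef = c - 1 - t * d                    # customer c joins a table
            join = (logS[c - 1, t] + np.log(coef)) if coef > 0 else -np.inf
            logS[c, t] = np.logaddexp(new, join)
    return logS

logS = log_stirling_table(200, d=0.5)
print(np.exp(logS[3, 1]))   # (1 - d)(2 - d) = 0.75 for d = 0.5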
One problem with this sampler is that we need to compute S_{d_u}(c, t) for all 1 ≤ c, t ≤ c_{us}. If d_u is fixed these can be precomputed and stored, but the resulting memory requirement is again large since each restaurant typically has its own d_u value. If d_u is updated in the sampling, then these will need to be computed each time as well, costing O(c_{us}²) per iteration. Further, S_d(c, t) typically has very high dynamic range, so care has to be taken to avoid numerical under-/overflow (e.g. by performing the computations in the log domain, involving many expensive log and exp computations).

Re-instantiating Seating Arrangements. Another strategy is to re-instantiate the seating arrangement by sampling A_{us} ~ CRP_{c_{us} t_{us}}(d_u) from its conditional distribution given c_{us}, t_{us} (see Section 5.2 below), then performing the original Gibbs sampling of seating arrangements [3, 11]. This produces a new number of tables t_{us} and the seating arrangement can be discarded. Note however that when t_{us} changes this sampler will introduce changes to ancestor restaurants (by adding or removing customers), so these will need to have their seating arrangements instantiated as well. To implement this sampler efficiently, we visit restaurants in depth-first order, keeping in memory only the seating arrangements of all restaurants on the path to the current one. The computational cost is O(c_{us} t_{us}), but with a potentially smaller hidden constant (no log/exp computations are required).

Original Gibbs Sampling of {c_{us}, t_{us}}. A third strategy is to "imagine" having a seating arrangement and running the original Gibbs sampler, incrementing t_{us} if a table would have been created, and decrementing t_{us} if a table would have been deleted. Recall that the original Gibbs sampler operates by iterating over customers, treating each as the last customer in the restaurant, removing it, then adding it back into the restaurant. When removing, if the customer were sitting by himself, a table would need to be deleted too, so the probability of decrementing t_{us} is the probability of a customer sitting by himself. From (2), this can be worked out to be

    P(decrement t_{us}) = S_{d_u}(c_{us} − 1, t_{us} − 1) / S_{d_u}(c_{us}, t_{us}).    (12)

The numerator is due to a sum over all seating arrangements where the other c_{us} − 1 customers sit at the other t_{us} − 1 tables. When adding back the customer, the probability of incrementing the number of tables is the probability that the customer sits at a new table of the same dish s:

    P(increment t_{us}) = (α_u + d_u t_{u·}) P_{σ(u)}(s) / ( (α_u + d_u t_{u·}) P_{σ(u)}(s) + c_{us} − t_{us} d_u ),    (13)

where P_{σ(u)}(s) is the predictive (9) with the current value of t_{us}, and c_{us}, t_{us} are values with the customer removed. This sampler also requires computation of S_{d_u}(c, t), but only for 1 ≤ t ≤ t_{us} which can be significantly smaller than c_{us}. Computation cost is O(c_{us} t_{us}) (but again with a larger constant due to computing the Stirling numbers in a stable way). We did not find a sampling method taking less time than O(c_{us} t_{us}).
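To illustrate, here is a simplified sketch (ours, not the authors' code) of one such imagined update for a single (context, symbol) pair, using the log-Stirling table above; parent_prob stands for the predictive probability (9) of the symbol under the parent restaurant, and propagating the table change to ancestor restaurants, which a full sampler must do, is omitted:

import math, random

def gibbs_step_compact(c, t, t_dot, d, alpha, parent_prob, logS,
                       rng=random.Random(1)):
    # One 'original Gibbs' step in the compact representation, for the
    # generic case c >= 2.  A customer is removed, decrementing the
    # table count t with the probability of Eq. (12), then re-added,
    # opening a new table with the probability of Eq. (13).
    # c, t: customers/tables for this symbol; t_dot: total tables in
    # the restaurant; updates to the parent restaurant are not shown.
    assert c >= 2, "sketch handles the generic case only"
    # Removal (Eq. 12): the customer sat alone with this probability.
    if t > 1 and rng.random() < math.exp(logS[c - 1, t - 1] - logS[c, t]):
        t -= 1
        t_dot -= 1
    c -= 1
    # Re-insertion (Eq. 13), with c, t, t_dot the post-removal values.
    w_new = (alpha + d * t_dot) * parent_prob
    w_old = c - t * d
    if rng.random() < w_new / (w_new + w_old):
        t += 1
        t_dot += 1
    return t, t_dot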
Particle Filtering. Equation (13) gives the probability of incrementing t_{us} (and adding a customer to the parent restaurant) when a customer is added into a restaurant. This can be used as the basis for a particle filter, which iterates through the sequence x_{1:T}, adding a customer corresponding to s = x_i in context u = x_{1:i−1} at each step. Since no customer deletion is required, the cost is very small: just O(c_{us}) for the c_{us} customers per s and u (plus the cost of traversing the hierarchy to the current restaurant, which is always necessary). Particle filtering works very well in online settings, e.g. compression [2], and as initialization for Gibbs sampling.

5.2 Re-instantiating A_{us} given c_{us}, t_{us}

To simplify notation, here we will let d = d_u, c = c_{us}, t = t_{us} and A = A_{us} ∈ A_{ct}. We will use the forward-backward algorithm in an undirected chain to sample A from CRP_{ct}(d) given in (2). First we re-express A using two sets of variables z_1, ..., z_c and y_1, ..., y_c. Label a table a ∈ A using the index of the first customer at the table, i.e. the smallest element of a. Let z_i be the number of tables occupied by the first i customers, and y_i the label of the table that customer i sits at. The variables satisfy the following constraints: z_1 = 1, z_c = t, and either z_i = z_{i−1}, in which case y_i ∈ [i−1], or z_i = z_{i−1} + 1, in which case y_i = i. This gives a one-to-one correspondence between seating arrangements in A_{ct} and settings of the variables satisfying the above constraints. Consider the following distribution over the variables satisfying the constraints: z_1, ..., z_c is distributed according to a Markov network with z_1 = 1, z_c = t, and edge potentials:

    f(z_i, z_{i−1}) = i − 1 − z_i d   if z_i = z_{i−1},
                    = 1               if z_i = z_{i−1} + 1,
                    = 0               otherwise.    (14)

It is easy to see that the normalization constant is simply S_d(c, t) and

    P(z_{1:c}) = ∏_{i: z_i = z_{i−1}} (i − 1 − z_i d) / S_d(c, t).    (15)

Given z_{1:c}, we give each y_i the following distribution conditioned on y_{1:i−1}:

    P(y_i | z_{1:c}, y_{1:i−1}) = 1   if y_i = i and z_i = z_{i−1} + 1,
                                = ( Σ_{j=1}^{i−1} 1(y_j = y_i) − d ) / (i − 1 − z_i d)   if z_i = z_{i−1} and y_i ∈ [i−1].    (16)

Figure 2: (a), (b) Number of context/symbol pairs and total number of tables (counted after particle filter initialization and 10 sampling iterations using the compact original sampler) as a function of input size. Subfigure (a) shows the counts obtained from a byte-level model of the news file in the Calgary corpus, whereas (b) shows the counts for a word-level model of the Brown corpus (training set). The space required for the compact representation is proportional to the number of context/symbol pairs, whereas for the full representation it is proportional to the number of tables. Note also that sampling tends to increase the number of tables over the particle filter initialization. (c) Time per iteration (seconds) as a function of input size for the original Gibbs sampler in the compact representation and the re-instantiating sampler (on the Brown corpus).

Multiplying all the probabilities together, we see that P(z_{1:c}, y_{1:c}) is exactly equal to P(A) in (2). Thus we can sample A by first sampling z_{1:c} from (15), then each y_i conditioned on previous ones using (16), and converting this representation into A. We use a backward-filtering-forward-sampling algorithm to sample z_{1:c}, as this avoids numerical underflow problems that can arise when using forward-filtering. Backward-filtering avoids these problems by incorporating the constraint that z_c has to equal t into the messages from the beginning.
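A compact version of this procedure for the z-chain might look as follows (our own sketch; the subsequent sampling of y_{1:c} via (16) is straightforward and omitted):

import numpy as np

def sample_z(c, t, d, rng=np.random.default_rng(0)):
    # Backward-filtering forward-sampling for z_{1:c} under the chain
    # with edge potentials f of Eq. (14), conditioned on z_1 = 1 and
    # z_c = t.  Log-domain messages incorporate the z_c = t constraint
    # from the start, as described in the text.
    logm = np.full((c + 2, t + 2), -np.inf)   # logm[i, z]: message into z_i
    logm[c, t] = 0.0
    for i in range(c - 1, 0, -1):
        for z in range(1, min(i, t) + 1):
            # f(z_{i+1} = z, z_i = z) = i - z*d; f(z_{i+1} = z+1, z_i = z) = 1.
            stay = logm[i + 1, z] + (np.log(i - z * d) if i - z * d > 0 else -np.inf)
            up = logm[i + 1, z + 1]
            logm[i, z] = np.logaddexp(stay, up)
    z = [1]
    for i in range(2, c + 1):                 # forward sampling
        zp = z[-1]
        lw_stay = logm[i, zp] + (np.log(i - 1 - zp * d) if i - 1 - zp * d > 0 else -np.inf)
        lw_up = logm[i, zp + 1]
        p_up = 1.0 / (1.0 + np.exp(lw_stay - lw_up)) if np.isfinite(lw_up) else 0.0
        z.append(zp + 1 if rng.random() < p_up else zp)
    return z

print(sample_z(c=10, t=4, d=0.5))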
Fragmenting a Restaurant. In particle filtering and in prediction, we often need to re-instantiate a restaurant which was previously marginalized out. We can do so by sampling A_{us} given c_{us}, t_{us} for each s, then fragmenting each A_{us} using Theorem 1, counting the resulting numbers of customers and tables, then forgetting the seating arrangements.

6 Experiments

In order to evaluate the proposed improvements in terms of reduced memory requirements and to compare the performance of the different sampling schemes we performed three sets of experiments.³

³ All experiments were performed on two data sets: the news file from the Calgary corpus (modeled as a sequence of 377,109 bytes; |Σ| = 256), and the Brown corpus (preprocessed as in [12]), modeled as a sequence of words (800,000 words training set; 181,041 words test set; |Σ| = 16383). Following [1], the discount parameters were fixed to .62, .69, .74, .80 for the first 4 levels and .95 for all subsequent levels of the hierarchy.

In the first experiment we evaluated the potential space saving due to the compact representation. Figure 2 shows the number of context/symbol pairs and the total number of tables as a function of data set size. While the difference does not seem dramatic, there is still a significant amount of memory that can be saved by using the compact representation, as there is no additional overhead and memory fragmentation due to variable-size arrays. The comparison between the byte-level model and the word-level model in Figure 2 also demonstrates that the compact representation saves more space when |Σ| is small (which leads to context/symbol pairs having larger c_{us}'s and t_{us}'s). Finally, Figure 2 illustrates another interesting effect: the number of tables is generally larger after a few iterations of Gibbs sampling have been performed after the initialization using a single-particle particle filter [2].

The second experiment compares the computational cost of the compact original sampler and the sampler that re-instantiates full seating arrangements. The main computational cost of the original sampler is computing the ratio (12), while sampling the seating arrangements is the main computational cost of the re-instantiating sampler. Figure 2(c) shows the time needed for one iteration of Gibbs sampling as a function of data set size. The re-instantiating sampler is found to be much more efficient, as it avoids the overhead involved in computing the Stirling numbers in a stable manner (e.g. log/exp computations). For the original sampler, time can be traded off with space by tabulating all required Stirling numbers along the path down the tree (as was done in these experiments). However, this leads to an additional memory overhead that mostly undoes any savings from the compact representation.
    α     PF only           Gibbs (1 sample)   Gibbs (50 samples avg)   Online
          Frag.   Parent    Frag.   Parent     Frag.   Parent           PF      Gibbs
    0     8.45    8.41      8.44    8.41       8.43    8.39             8.04    8.04
    1     8.41    8.39      8.40    8.39       8.39    8.38             8.01    8.01
    3     8.37    8.37      8.37    8.37       8.35    8.35             7.98    7.98
    10    8.33    8.34      8.33    8.33       8.32    8.32             7.95    7.94
    20    8.32    8.33      8.32    8.32       8.31    8.31             7.94    7.94
    50    8.32    8.33      8.31    8.32       8.31    8.31             7.95    7.95

Table 1: Average log-loss on the Brown corpus (test set) for different values of α, different inference strategies, and different modes of prediction. Inference is performed by either just using the particle filter or using the particle filter followed by 50 burn-in iterations of Gibbs sampling. Subsequently either 1 or 50 samples are collected for prediction. Prediction is performed either using fragmentation or by predicting from the parent node. The final two columns labelled Online show the results obtained by using the particle filter on the test set as well, after training with either just the particle filter or particle filter followed by 50 Gibbs iterations. Non-zero values of α can be seen to provide a significant increase in performance, while the gains due to averaging samples or proper fragmentation during prediction are small.

The third set of experiments uses the re-instantiating sampler and compares different modes of prediction and the effect of the non-zero concentration parameter. The results are shown in Table 1. Predictions with the SM can be made in several different ways. After obtaining one or more samples from the posterior distribution over customers and tables (either using particle filtering or Gibbs sampling on the training set) one has a choice of either using particle filtering on the test set as well (online setting), or making predictions while keeping the model fixed. One also has a choice when making predictions involving contexts that were marginalized out from the model: one can either re-instantiate these contexts by fragmentation or simply predict from the parent (or even the child) of the required node. While one ultimately wants to average predictions over the posterior distribution, one may consider using just a single sample for computational reasons.

7 Discussion

In this paper we proposed an enlarged set of hyperparameters for the sequence memoizer that retains the coagulation/fragmentation properties important for efficient inference, and we proposed a new minimal representation of the Chinese restaurant processes to reduce the memory requirement of the sequence memoizer. We developed novel inference algorithms for the new representation, and presented experimental results exploring their behaviors. We found that the algorithm which re-instantiates seating arrangements is significantly more efficient than the other two Gibbs samplers, while particle filtering is most efficient but produces slightly worse predictions. Along the way, we formalized the metaphorical language often used to describe Chinese restaurant processes in the machine learning literature, and were able to provide an elementary proof of the coagulation/fragmentation properties. We believe this more precise language will be of use to researchers interested in hierarchical Dirichlet processes and their various generalizations.
We are currently exploring methods to compute or approximate the generalized Stirling numbers, and efficient methods to optimize the hyperparameters in the sequence memoizer. A parting remark is that the posterior distribution over {c_{us}, t_{us}} in (10) is in the form of a standard Markov network with sum constraints (7). Thus other inference algorithms like loopy belief propagation or variational inference can potentially be applied. There are however two difficulties to be resolved before these are possible: the large domains of the variables, and the large dynamic ranges of the factors.

Acknowledgments

We would like to thank the Gatsby Charitable Foundation for generous funding.

References

[1] F. Wood, C. Archambeau, J. Gasthaus, L. F. James, and Y. W. Teh. A stochastic memoizer for sequence data. In Proceedings of the International Conference on Machine Learning, volume 26, pages 1129-1136, 2009.
[2] J. Gasthaus, F. Wood, and Y. W. Teh. Lossless compression based on the Sequence Memoizer. In James A. Storer and Michael W. Marcellin, editors, Data Compression Conference, pages 337-345, Los Alamitos, CA, USA, 2010. IEEE Computer Society.
[3] Y. W. Teh. A Bayesian interpretation of interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, National University of Singapore, 2006.
[4] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870-1902, 1999.
[5] M. W. Ho, L. F. James, and J. W. Lau. Coagulation fragmentation laws induced by general coagulations of two-parameter Poisson-Dirichlet processes. http://arxiv.org/abs/math.PR/0601608, 2006.
[6] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[7] P. Blunsom, T. Cohn, S. Goldwater, and M. Johnson. A note on the implementation of hierarchical Dirichlet processes. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 337-340, Suntec, Singapore, August 2009. Association for Computational Linguistics.
[8] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855-900, 1997.
[9] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, 2001.
[10] L. C. Hsu and P. J.-S. Shiue. A unified approach to generalized Stirling numbers. Advances in Applied Mathematics, 20:366-384, 1998.
[11] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985-992, 2006.
[12] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.
3,244
3,939
Attractor Dynamics with Synaptic Depression

C. C. Alan Fung, K. Y. Michael Wong
Hong Kong University of Science and Technology, Hong Kong, China
[email protected], [email protected]

He Wang
Tsinghua University, Beijing, China
[email protected]

Si Wu
Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
[email protected]

Abstract

Neuronal connection weights exhibit short-term depression (STD). The present study investigates the impact of STD on the dynamics of a continuous attractor neural network (CANN) and its potential roles in neural information processing. We find that the network with STD can generate both static and traveling bumps, and STD enhances the performance of the network in tracking external inputs. In particular, we find that STD endows the network with slow-decaying plateau behaviors, namely, the network being initially stimulated to an active state will decay to silence very slowly in the time scale of STD rather than that of neural signaling. We argue that this provides a mechanism for neural systems to hold short-term memory easily and shut off persistent activities naturally.

1 Introduction

Networks of various types, formed by a large number of neurons through synapses, are the substrate of brain functions. The network structure is the key that determines the responsive behaviors of a network to external inputs, and hence the computations implemented by the neural system. Understanding the relationship between the structure of a neural network and the function it can achieve is at the core of using mathematical models for elucidating brain functions.

In the conventional modeling of neuronal networks, it is often assumed that the connection weights between neurons, which model the efficacy of the activities of pre-synaptic neurons on modulating the states of post-synaptic neurons, are constants, or vary only in long-time scales when learning occurs. However, experimental data has consistently revealed that neuronal connection weights change in short time scales, varying from hundreds to thousands of milliseconds (see, e.g., [1]). This is called short-term plasticity (STP). A predominant type of STP is short-term depression (STD), which decreases the connection efficacy when a pre-synaptic neuron fires. The physiological process underlying STD is the depletion of available resources when signals are transmitted from a presynaptic neuron to the post-synaptic one.

Is STD simply a by-product of the biophysical process of neural signaling? Experimental and theoretical studies have suggested that this is unlikely to be the case. Instead, STD can play very active roles in neural computation. For instance, it was found that STD can achieve gain control in regulating neural responses to external inputs, realizing Weber's law [2, 3]. Another example is that STD enables a network to generate transient synchronized population firing, appealing for detecting subtle changes in the environment [4, 5]. The STD of a neuron is also thought to play a role in estimating the information of the pre-synaptic membrane potential from the spikes it receives [6]. From the computational point of view, the time scale of STD resides between fast neural signaling (in the order of milliseconds) and slow learning (in the order of minutes or above), which is the time order of many important temporal operations occurring in our daily life, such as working memory. Thus, STD may serve as a substrate for neural systems to manipulate temporal information in the relevant time scales.
In this study, we will further explore the potential role of STD in neural information processing, an issue of fundamental importance that has not been adequately investigated so far. We will use continuous attractor neural networks (CANNs) as working models. CANNs are a type of recurrent networks which hold a continuous family of localized active states [7]. Neutral stability is a key advantage of CANNs, which enables neural systems to update memory states or to track time-varying stimuli smoothly. CANNs have been successfully applied to describe the retaining of short-term memory, and the encoding of continuous features, such as the orientation, the head direction and the spatial location of objects, in neural systems [8, 9, 10]. CANNs are also shown to provide a framework for implementing population decoding efficiently [11].

We analyze the dynamics of a CANN with STD included, and find that apart from the static bump states, the network can also hold moving bump solutions. This finding agrees with the results reported in the literature [12, 13]. In particular, we find that with STD, the network can have slow-decaying plateau states, that is, the network being stimulated to an active state by a transient input will decay to silence very slowly in the time order of STD rather than that of neural signaling. This is a very interesting property. It implies that STD can provide a mechanism for neural systems to generate short-term memory and shut off activities naturally. We also find that STD retains the neutral stability of the CANN, and enhances the tracking performance of the network to external inputs.

2 The Model

Let us consider a one-dimensional continuous stimulus x encoded by an ensemble of neurons. For example, the stimulus may represent the moving direction, the orientation or a general continuous feature of objects extracted by the neural system. Let u(x, t) be the synaptic input at time t to the neurons whose preferred stimulus is x. The range of the possible values of the stimulus is −L/2 < x ≤ L/2 and u(x, t) is periodic, i.e., u(x + L) = u(x). The dynamics is particularly convenient to analyze in the limit that the interaction range a is much less than the stimulus range L, so that we can effectively take x ∈ (−∞, ∞). The dynamics of u(x, t) is determined by the external input I_ext(x, t), the network input from other neurons, and its own relaxation. It is given by

    τ_s ∂u(x, t)/∂t = I_ext(x, t) + ρ ∫_{−∞}^{∞} dx′ J(x, x′) p(x′, t) r(x′, t) − u(x, t),    (1)

where τ_s is the synaptic transmission delay, which is typically in the order of 2 to 5 ms. J(x, x′) is the base neural interaction from x′ to x. r(x, t) is the firing rate of neurons. It increases with the synaptic input, but saturates in the presence of a global activity-dependent inhibition. A solvable model that captures these features is given by r(x, t) = u(x, t)² / [1 + kρ ∫_{−∞}^{∞} dx′ u(x′, t)²], where ρ is the neural density, and k is a positive constant controlling the strength of global inhibition. The global inhibition can be generated by shunting inhibition [14].

The key character of CANNs is the translational invariance of their neural interactions. In our solvable model, we choose Gaussian interactions with a range a, namely, J(x, x′) = J_0 exp[−(x − x′)²/(2a²)] / (a√(2π)), where J_0 is a constant.

The STD coefficient p(x, t) in Eq. (1) takes into account the pre-synaptic STD. It has the maximum value of 1, and decreases with the firing rate of the neuron [15, 16]. Its dynamics is given by

    τ_d ∂p(x, t)/∂t = 1 − p(x, t) − p(x, t) τ_d β r(x, t),    (2)

where τ_d is the time constant for synaptic depression, and the parameter β controls the depression effect due to neural firing. The network dynamics is governed by two time scales. The time constant of STD is typically in the range of hundreds to thousands of milliseconds, much larger than that of neural signaling, i.e., τ_d ≫ τ_s. The interplay between the fast and slow dynamics causes the network to exhibit interesting dynamical behaviors.
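For concreteness, the model is easy to simulate directly. The sketch below (our own discretization, not the authors' code) integrates Eqs. (1) and (2) with forward Euler on a periodic grid; the parameter values are illustrative, with τ_d/τ_s = 50 and the working point (k̃, β̃) = (0.9, 0.005) of the static state in Fig. 7(a) defined later in the text:

import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
rho = N / L                                   # neural density
a, J0 = 0.5, 1.0
tau_s, tau_d = 1.0, 50.0                      # tau_d / tau_s = 50
kc = rho * J0**2 / (8 * a * np.sqrt(2 * np.pi))
k = 0.9 * kc                                  # k_tilde = 0.9
beta = 0.005 * rho**2 * J0**2 / tau_d         # beta_tilde = 0.005

diff = (x[:, None] - x[None, :] + L / 2) % L - L / 2    # periodic distance
J = J0 * np.exp(-diff**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))

def step(u, p, I_ext, dt=0.05):
    r = u**2 / (1 + k * rho * np.sum(u**2) * dx)        # divisive inhibition
    du = (I_ext + rho * (J @ (p * r)) * dx - u) / tau_s
    dp = (1 - p - tau_d * beta * p * r) / tau_d
    return u + dt * du, p + dt * dp

u = 0.5 * np.exp(-x**2 / (4 * a**2))                    # initial bump at z = 0
p = np.ones(N)
for _ in range(4000):                                   # relax with no input
    u, p = step(u, p, I_ext=0.0)
print(u.max(), 1 - p.min())                             # bump height, depression depth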
p(x, t)?d ?r(x, t), (2) ?d ?t where ?d is the time constant for synaptic depression, and the parameter ? controls the depression effect due to neural firing. The network dynamics is governed by two time scales. The time constants of STD is typically in the range of hundreds to thousands of milliseconds, much larger than that of neural signaling, i.e., ?d ? ?s . The interplay between the fast and slow dynamics causes the network to exhibit interesting dynamical behaviors. 2 0.5 3 0.4320 0.3780 2 0.4 0.3240 0.2700 0.2160 1 0.1620 0.1080 u(x) 0.05400 0 x 0.000 0.3 0.2 -1 -2 0.1 -3 0 5 10 t/ 15 20 0 s Figure 1: The neural response profile tracks the change of position of the external stimulus from z0 = 0 to 1.5 at t = 0. Parameters: a = 0.5, k = 0.95, ? = 0, ? = 0.5. -2 0 t/?s 2 Figure 2: The profile of u(x, t) at t/? = 0, 1, 2, ? ? ? , 10 during the tracking process in Fig. 1. 2.1 Dynamics of CANN without Dynamical Synapses It is instructive to first consider the network dynamics when no dynamical synapses are included. This is done by setting ? = 0 in Eq. (2), so that p(x, t) = 1 for all t. In this case, the network can support a continuous family of stationary states when the global inhibition is not too strong. Specifically, the steady state solution to Eq. (1) is     (x ? z)2 (x ? z)2 , r ? (x|z) = r exp ? , (3) u?(x|z) = u0 exp ? 0 4a2 2a2 ? ? where u0?= [1 + (1 ? k/kc )1/2 ]J0 /(4ak ?), r0 = [1 + (1 ? k/kc )1/2 ]/(2ak? 2?) and kc = ?J02 /(8a 2?). These stationary states are translationally invariant among themselves and have the Gaussian shape with a free parameter z representing the position of the Gaussian bumps. They exist for 0 < k < kc , kc is thus the critical inhibition strength. Fung et al [17] considered the perturbations of the Gaussian states. They found various distortion modes, each characterized by an eigenvalue representing its rate of evolution in time. A key property they found is that the translational mode has a zero eigenvalue, and all other distortion modes have negative eigenvalues for k < kc . This implies that the Gaussian bumps are able to track changes in the position of the external stimuli by continuously shifting the position of the bumps, with other distortion modes affecting the tracking process only in the transients. An example of the tracking process is shown in Figs. 1 and 2, when an external stimulus with a Gaussian profile is initially centered at z = 0, pinning the center of a Gaussian neuronal response at the same position. At time t = 0, the stimulus shifts its center from z = 0 to z = 1.5 abruptly. The bump moves towards the new stimulus position, and catches up with the stimulus change after a time duration. which is referred to as the reaction time. 3 Dynamics of CANN with Synaptic Depression For clarity, we will first summarize the main results obtained on the network dynamics due to STD, and then present the theoretical analysis in Sec. 4. 3.1 The Phase Diagram In the presence of STD, CANNs exhibit new interesting dynamical behaviors. Apart from the static bump state, the network also supports moving bump states. To construct a phase diagram mapping these behaviors, we first consider how the global inhibition k and the synaptic depression ? scale with other parameters. In the steady state solution of Eq. (1), u0 and ?J0 u20 should have the same dimension; so are 1?p(x, t) and ?d ?u0 in Eq. (2). Hence we introduce the dimensionless parameters k ? k/kc and ? ? ?d ?/(?2 J02 ). 
The phase diagram obtained by numerical solutions to the network dynamics is shown in Fig. 3. 3 0.06 Moving Silent Figure 3: Phase diagram of the network states. Symbols: numerical solutions. Dashed line: Eq. (10). Dotted line: Eq. (13). Solid line: Gaussian approxiP mation using 11th order perturbation of the STD coefficient. Point P: the working point for Figs. 4 and 7. Parameters: 1 ?d /?s = 50, a = 0.5/6, range of the network = [??, ?). 0.04 ? Metastatic or Moving 0.02 0 0 0.2 0.4 Static 0.8 0.6 k We first note that the synaptic depression and the global inhibition plays the same role in reducing the amplitude of the bump states. This can be seen from the steady state solution of u(x, t), which reads Z ?J(x ? x? )u(x? )2 R u(x) = dx? . (4) 1 + k? dx?? u(x?? )2 + ?d ?u(x? )2 The third term in the denominator of the integrand arises from STD, and plays the role of a local inhibition that is strongest where the neurons are most active. Hence we see that the silent state with u(x, t) = 0 is the only stable state when either k or ? is large. When STD is weak, the network behaves similarly with CANNs without STD, that is, the static bump state is present up to k near 1. However, when ? increases, a state with the bump spontaneously moving at a constant velocity comes into existence. Such moving states have been predicted in CANNs [12, 13], and can be associated with traveling wave behaviors widely observed in the neocortex [18]. At an intermediate range of ?, both the static and moving states coexist, and the final state of the network depends on the initial condition. When ? increases further, only the moving state is present. 3.2 The Plateau Behavior The network dynamics displays a very interesting behavior in the parameter regime when the static bump solution just loses its stability. In this regime, an initially activated network state decays very slowly to silence, in the time order of ?d . Hence, although the bump state eventually decays to the silent state, it goes through a plateau region of a slowly decaying amplitude, as shown in Fig. 4. 5 0.05 0.04 B 1-minxp(x,t) maxx ?J0u(x,t) 4 3 2 1 0 0 A B 0.03 0.02 0.01 A 100 t 200 300 0 0 100 200 300 t 400 500 Figure 4: Magnitudes of rescaled neuronal input ?J0 u(x, t) and synaptic depression 1 ? p(x, t) at (k, ?) = (0.95, 0.0085) (point P in Fig. 3) and for initial conditions of types A and B in Fig. 8. Symbols: numerical solutions. Lines: Gaussian approximation using Eqs. (8) and (9). Other parameters: ?d /?s = 50, a = 0.5 and x ? [??, ?). 3.3 Enhanced Tracking Performance The responses of CANNs with STD to an abrupt change of stimulus are illustrated in Fig. 5. Compared with networks without STD, we find that the bump shifts to the new position faster. The extent of improvement in the presence of STD is quantified in Fig. 6. However, when ? is too strong, the bump tends to overshoot the target before eventually approaching it. 4 2 0.9 1.5 z(t) 0.8 k = 0.5, ? = 0 k = 0.5, ? = 0.05 k = 0.5, ? = 0.2 0.5 0 0 v at z = 0.5z0 1 10 t 20 k = 0.3 k = 0.5 k = 0.7 0.7 0.6 0.5 0.4 30 0.3 0 Figure 5: The response of CANNs with STD to an abruptly changed stimulus from z0 = 0 to z0 = 1.5 at t = 0. Symbols: numerical solutions. Lines: Gaussian approximation using 11th order perturbation of the STD coefficent. Parameters: ?d /?s = 50, ? = 0.5, a = 0.5 and x ? [??, ?). 0.02 0.04 ? 
4 Analysis

Despite the apparently complex behaviors of CANNs with STD, we will show in this section that a Gaussian approximation can reproduce the behaviors and facilitate the interpretation of the results. Details are explained in the Supplementary Information.

We observe that the profile of the bump remains effectively Gaussian in the presence of synaptic depression. On the other hand, there is a considerable distortion of the profile of the synaptic depression when STD is strong. Yet, to the lowest order approximation, let us approximate the profile of the synaptic depression to be a Gaussian as well, which is valid when STD is weak, as shown in Fig. 7(a). Hence, for a ≪ L, we propose the following ansatz:

u(x,t) = u_0(t) \exp\!\left[-\frac{(x-z)^2}{4a^2}\right], \qquad (5)

p(x,t) = 1 - p_0(t) \exp\!\left[-\frac{(x-z)^2}{2a^2}\right]. \qquad (6)

When these expressions are substituted into the dynamical equations (1) and (2), other functions f(x) of x appear. To maintain consistency with the Gaussian approximation, these functions will be approximated by their projections onto the Gaussian functions. In Eq. (1), we approximate

f(x) \to \left[\int \frac{dx'}{\sqrt{2\pi a^2}}\, f(x')\, e^{-\frac{(x'-z)^2}{4a^2}}\right] e^{-\frac{(x-z)^2}{4a^2}}. \qquad (7)

Similarly, in Eq. (2), we approximate f(x) by its projection onto exp[-(x-z)^2/(2a^2)].

4.1 The Solution of the Static Bumps

Without loss of generality, we let z = 0. Substituting Eqs. (5) and (6) into Eqs. (1) and (2), and letting u(t) ≡ ρJ_0 u_0(t), we get

\tau_s \frac{du(t)}{dt} = \frac{u(t)^2}{\sqrt{2}\,[1 + k u(t)^2/8]} \left[1 - \sqrt{\tfrac{4}{7}}\, p_0(t)\right] - u(t), \qquad (8)

\tau_d \frac{dp_0(t)}{dt} = \frac{\tau_d \beta\, u(t)^2}{1 + k u(t)^2/8} \left[1 - \sqrt{\tfrac{2}{3}}\, p_0(t)\right] - p_0(t). \qquad (9)

By considering the steady state solution of u and p_0 and their stability against fluctuations of u and p_0, we find that stable solutions exist when

\beta \le \frac{p_0 \left(1 - \sqrt{4/7}\, p_0\right)^2}{4\left(1 - \sqrt{2/3}\, p_0\right)} \left[1 + \frac{\tau_s}{\tau_d \left(1 - \sqrt{2/3}\, p_0\right)}\right], \qquad (10)

where p_0 is the steady state solution of Eqs. (1) and (2). The boundary of this region is shown as a dashed line in Fig. 3. Unfortunately, this line is not easily observed in numerical solutions since the static bump is unstable against fluctuations that are asymmetric with respect to its central position. Although the bump is stable against symmetric fluctuations, asymmetric fluctuations can displace its position and eventually convert it to a moving bump.

4.2 The Solution of the Moving Bumps

As shown in Fig. 7(b), the profile of a moving bump is characterized by a lag of the synaptic depression behind the moving bump. This is because neurons tend to be less active in locations of low values of p(x,t), causing the bump to move away from locations of strong synaptic depression. In turn, the region of synaptic depression tends to follow the bump. However, if the time scale of synaptic depression is large, the recovery of the synaptically depressed region is slowed down, and cannot catch up with the bump motion. Thus, the bump starts moving spontaneously. To incorporate asymmetry into the moving state, we propose the following ansatz:

u(x,t) = u_0(t) \exp\!\left[-\frac{(x-vt)^2}{4a^2}\right], \qquad (11)

p(x,t) = 1 - p_0(t) \exp\!\left[-\frac{(x-vt)^2}{2a^2}\right] + p_1(t) \exp\!\left[-\frac{(x-vt)^2}{2a^2}\right] \frac{x-vt}{a}. \qquad (12)

Projecting the terms in Eq. (1) onto the basis functions exp[-(x-vt)^2/(4a^2)] and exp[-(x-vt)^2/(4a^2)](x-vt)/a, and those in Eq. (2) onto exp[-(x-vt)^2/(2a^2)] and exp[-(x-vt)^2/(2a^2)](x-vt)/a, we obtain four equations for u, p_0, p_1 and vτ_s/a. Real solutions exist only if

\frac{\beta u^2}{1 + k u^2/8} \ge \frac{\tau_s}{\tau_d}\, A^{-1} \left[ B + \sqrt{B^2 - C\, \frac{\tau_s}{\tau_d}} \right], \qquad (13)

where A = 7\sqrt{7}/4, B = (7/4)\left[(5/2)\sqrt{7/6} - 1\right], and C = (343/36)\left(1 - \sqrt{6/7}\right).
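The constants of Eq. (13) are easy to evaluate numerically. A short sketch, assuming the reconstruction of Eq. (13) given above and τ_d/τ_s = 50 as in Fig. 3:

```python
import numpy as np

# Sketch: constants of Eq. (13) and the resulting threshold on the quantity
# beta*u^2/(1 + k*u^2/8) above which moving-bump solutions become real.
A = 7 * np.sqrt(7) / 4
B = (7 / 4) * ((5 / 2) * np.sqrt(7 / 6) - 1)
C = (343 / 36) * (1 - np.sqrt(6 / 7))
r = 1 / 50.0                               # tau_s / tau_d
threshold = (r / A) * (B + np.sqrt(B**2 - C * r))
print(f"A = {A:.4f}, B = {B:.4f}, C = {C:.4f}, threshold = {threshold:.5f}")
```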
As shown in Fig. 3, the boundary of this region effectively coincides with the numerical solution of the line separating the static and moving phases. Note that when τ_d/τ_s increases, the static phase shrinks. This is because the recovery of the synaptically depressed region is slowed down, making it harder to catch up with changes in the bump motion.

Figure 7: Neuronal input u(x,t) and the STD coefficient p(x,t) in (a) the static state at (k, β) = (0.9, 0.005), and (b) the moving state at (k, β) = (0.5, 0.015). Parameter: τ_d/τ_s = 50.

An alternative approach that arrives at Eq. (13) is to consider the instability of the static bump, which is obtained by setting v and p_1 to zero in Eqs. (11) and (12). Considering the instability of the static bump against the asymmetric fluctuations in p_1 and vt, we again arrive at Eq. (13). This shows that as soon as the moving bump comes into existence, the static bump becomes unstable. This also implies that in the entire region in which the static and moving bumps coexist, the static bump is unstable to asymmetric fluctuations. It is stable (or more precisely, metastable) when it is static, but once it is pushed to one side, it will continue to move along that direction. We may call this behavior metastatic. As we shall see, this metastatic behavior is also the cause of the enhanced tracking performance.

4.3 The Plateau Behavior

To illustrate the plateau behavior, we select a point in the marginally unstable regime of the silent phase, that is, in the vicinity of the static phase. As shown in Fig. 8, the nullclines of u and p_0 (du/dt = 0 and dp_0/dt = 0, respectively) do not have any intersections, as they do in the static phase where the bump state exists. Yet, they are still close enough to create a region with very slow dynamics near the apex of the u-nullcline at (u, p_0) = [(8/k)^{1/2}, \sqrt{7/4}\,(1 - \sqrt{k})]. Then, in Fig. 8, we plot the trajectories of the dynamics starting from different initial conditions. For verification, we also solve the full equations (1) and (2), and plot a flow diagram with the axes being max_x u(x,t) and 1 - min_x p(x,t). The resultant flow diagram is in satisfactory agreement with Fig. 8.

The most interesting family of trajectories is represented by B and C in Fig. 8. Due to the much faster dynamics of u, trajectories starting from a wide range of initial conditions converge rapidly, in a time of the order τ_s, to a common trajectory in the close neighborhood of the u-nullcline. Along this common trajectory, u is effectively the steady state solution of Eq. (8) at the instantaneous value of p_0(t), which evolves with the much longer time scale τ_d. This gives rise to the plateau region of u, which can survive for a duration of the order τ_d. The plateau ends after the trajectory has passed the slow region near the apex of the u-nullcline. This dynamics is in clear contrast with trajectory D, in which the bump height decays to zero in a time of the order τ_s. Trajectory A represents another family of trajectories having rather similar behaviors, although the lifetimes of their plateaus are not so long. These trajectories start from more depleted initial conditions, and hence do not have chances to get close to the u-nullcline. Nevertheless, they converge rapidly, in a time of order τ_s, to the band u ≈ (8/k)^{1/2}, where the dynamics of u is slow. The trajectories then rely mainly on the dynamics of p_0 to carry them out of this slow region, and hence plateaus with lifetimes of the order τ_d are created.
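Trajectories of this kind can be reproduced from the reduced dynamics alone. A sketch integrating Eqs. (8)-(9), as reconstructed above, at point P of Fig. 3; the initial values are illustrative guesses for a type-B trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: the reduced (u, p0) dynamics of Eqs. (8)-(9) at point P of Fig. 3.
# Starting near the u-nullcline, u(t) shows a plateau of duration ~ tau_d
# before decaying to the silent state.
tau_s, tau_d = 1.0, 50.0
k, beta = 0.95, 0.0085

def rhs(t, y):
    u, p0 = y
    du = (u**2 / (np.sqrt(2) * (1 + k * u**2 / 8))) * (1 - np.sqrt(4/7) * p0) - u
    dp0 = beta * u**2 / (1 + k * u**2 / 8) * (1 - np.sqrt(2/3) * p0) - p0 / tau_d
    return [du / tau_s, dp0]

sol = solve_ivp(rhs, (0.0, 500.0), [2.8, 0.0], max_step=0.5)
# Inspect sol.t against sol.y[0]: u stays elevated for a time of order tau_d.
print(sol.y[0, ::len(sol.t) // 10])
```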
Figure 8: Trajectories of the network dynamics starting from various initial conditions at (k, β) = (0.95, 0.0085) (point P in Fig. 3). Solid line: u-nullcline. Dashed line: p_0-nullcline. Symbols are data points spaced at time intervals of 2τ_s. The annotated region ("Bumps can sustain here.") marks where bump states persist.

Figure 9: Contours of plateau lifetimes in the space of k and β. The lines are the two topmost phase boundaries in Fig. 3. In the initial condition, α = 0.5.

Following similar arguments, the plateau behavior also exists in the stable region of the static states. This happens when the initial condition of the network lies outside the basin of attraction of the static states, but is still in the vicinity of the basin boundary. When one goes deeper into the silent phase, the region of slow dynamics between the u- and p_0-nullclines broadens. Hence plateau lifetimes are longest near the phase boundary between the bump and silent states, and become shorter when one goes deeper into the silent phase. This is confirmed by the contours of plateau lifetimes in the phase diagram shown in Fig. 9, obtained by numerical solution. The initial condition is uniformly set by introducing an external stimulus I^{ext}(x|z_0) = αu_0 exp[-x^2/(4a^2)] on the right hand side of Eq. (1), where α is the stimulus strength. After the network has reached a steady state, the stimulus is removed at t = 0, leaving the network to relax.

4.4 The Tracking Behavior

To study the tracking behavior, we add the external stimulus I^{ext}(x|z_0) = αu_0 exp[-(x - z_0)^2/(4a^2)] to the right hand side of Eq. (1), where z_0 is the position of the stimulus, abruptly changed at t = 0. With this additional term, we solve the modified versions of the equations obtained from the ansatz (11) and (12), and the solution reproduces the qualitative features due to the presence of synaptic depression, namely, the faster response at weak β and the overshooting at stronger β. As remarked previously, this is due to the metastatic behavior of the bumps, which enhances their readiness to move from the static state when a small push is exerted. However, when describing the overshooting of the tracking process, the quantitative agreement between the numerical solution and the ansatz in Eqs. (11) and (12) is not satisfactory. We have made improvements by developing a higher order perturbation analysis using basis functions of the quantum harmonic oscillator [17]. As shown in Fig. 5, the quantitative agreement is then much more satisfactory.
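The tracking improvement can also be probed directly in the full network simulation. A rough sketch (not the authors' code) reusing the toy discretization from the earlier sketch; parameter values, the convergence criterion, and the lack of pre-equilibration for p are simplifying assumptions:

```python
import numpy as np

# Sketch: reaction time of the full network, Eqs. (1)-(2), to an abrupt
# stimulus shift from z0 = 0 to 1.5, with and without STD.
def react(beta, k=0.4, alpha=0.5, a=0.5, J0=1.0, T=6000, dt=0.05):
    N = 256
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    rho = N / (2 * np.pi)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))
    u = np.exp(-x**2 / (4 * a**2))        # bump initially centered at 0
    p = np.ones(N)
    z0 = 1.5
    for t in range(T):
        I = alpha * np.exp(-(x - z0)**2 / (4 * a**2))   # shifted stimulus
        r = np.maximum(u, 0.0)**2
        r /= 1.0 + k * rho * dx * r.sum()
        u += dt * (-u + rho * dx * (J @ (p * r)) + I)
        p += dt * ((1.0 - p) / 50.0 - beta * p * r)
        if x[np.argmax(u)] > 0.95 * z0:   # bump center reaches the target
            return t * dt
    return np.inf

print(react(0.0), react(0.05))            # compare with and without STD
```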
5 Conclusions and Discussions

In this work, we have investigated the impact of STD on the dynamics of a CANN, and found that the network can support both static and moving bumps. Static bumps exist only when the synaptic depression is sufficiently weak. A consequence of synaptic depression is that it places static bumps in the metastatic state, so that their response to changing stimuli is sped up, enhancing the tracking performance. We conjecture that moving bump states may be associated with traveling wave behaviors widely observed in the neocortex.

A finding in our work with possibly very important biological implications is that STD endows the network with slow-decaying behaviors. When the network is initially stimulated to an active state by an external input, it decays to silence very slowly after the input is removed. The duration of the plateau is of the time scale of STD rather than neural signaling, and it provides a way for the network to hold the stimulus information for up to hundreds of milliseconds, if the network operates in the parameter regime in which the bumps are marginally unstable. This property is, on the other hand, extremely difficult to implement in attractor networks without STD. In a CANN without STD, an active state of the network decays to silence exponentially fast or persists forever, depending on the initial activity level of the network. Indeed, how to shut off the activity of a CANN has been a challenging issue that has received wide attention in theoretical neuroscience, with solutions suggesting that a strong external input, either in the form of inhibition or excitation, must be applied (see, e.g., [19]). Here, we show that STD provides a mechanism for closing down network activities naturally and for the desired duration.

We have also analyzed the dynamics of CANNs with STD using a Gaussian approximation of the bump. It describes the phase diagram of the static and moving phases and the plateau behavior, and provides insights into the metastatic nature of the bumps and its relation to the enhanced tracking performance. In most cases, approximating 1 - p(x,t) by a Gaussian profile is already sufficient to produce qualitatively satisfactory results. However, higher order perturbation analysis is required to yield more accurate descriptions of results such as the overshooting in the tracking process (Fig. 5).

Besides STD, there are other forms of STP that may be relevant to realizing short-term memory. Mongillo et al. [20] have recently proposed a very interesting idea for achieving working memory in the prefrontal cortex by utilizing the effect of short-term facilitation (STF). Compared with STD, STF has the opposite effect in modifying the neuronal connection weights. The underlying biophysics of STF is the increased level of residual calcium due to neural firing, which increases the release probability of neurotransmitters. Mongillo et al. [20] showed that STF provides a way for the network to encode the information of external inputs in the facilitated connection weights, and it has the advantage of not having to recruit persistent neural firing and hence is economically efficient. This STF-based memory mechanism is, however, not necessarily contradictory to the STD-based one we propose here. They may be present in different cortical areas for different computational purposes. Indeed, STD and STF have been observed to have different effects in different cortical areas. One location is the sensory cortex, where CANN models are often applicable; here, the effects of STD tend to be stronger than those of STF. Different from the STF-based mechanism, our work suggests that the STD-based one exhibits prolonged neural firing, which has been observed in some cortical areas. In terms of information transmission, prolonged neural firing is preferable in the early information pathways, so that the stimulus information can be conveyed to higher cortical areas through neuronal interactions.
Hence, it seems that the brain may use a strategy of weighting the effects of STD and STF differentially for carrying out different computational tasks. It is our goal in future work to explore the joint impact of STD and STF on the dynamics of neuronal networks.

This work is partially supported by the Research Grants Council of Hong Kong (grant nos. HKUST 603607 and 604008).

References

[1] H. Markram, Y. Wang and M. Tsodyks, Proc. Natl. Acad. Sci. U.S.A., 95, 5323 (1998).
[2] M. Tsodyks and H. Markram, Proc. Natl. Acad. Sci. U.S.A., 94, 719-723 (1997).
[3] L. F. Abbott, J. A. Varela, K. Sen and S. B. Nelson, Science, 275, 220-224 (1997).
[4] M. Tsodyks, A. Uziel and H. Markram, J. Neurosci., 20, 1-5 (2000).
[5] A. Loebel and M. Tsodyks, J. Comput. Neurosci., 13, 111-124 (2002).
[6] J.-P. Pfister, P. Dayan, and M. Lengyel, Advances in Neural Information Processing Systems 22, Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta (eds.), 1464 (2009).
[7] S. Amari, Biological Cybernetics, 27, 77-87 (1977).
[8] R. Ben-Yishai, R. Lev Bar-Or and H. Sompolinsky, Proc. Natl. Acad. Sci. U.S.A., 92, 3844-3848 (1995).
[9] K.-C. Zhang, J. Neurosci., 16, 2112-2126 (1996).
[10] A. Samsonovich and B. L. McNaughton, J. Neurosci., 17, 5900-5920 (1997).
[11] S. Deneve, P. E. Latham and A. Pouget, Nature Neuroscience, 2, 740-745 (1999).
[12] L. C. York and M. C. W. van Rossum, J. Comput. Neurosci., 27, 607-620 (2009).
[13] Z. P. Kilpatrick and P. C. Bressloff, Physica D, 239, 547-560 (2010).
[14] J. Hao, X. Wang, Y. Dan, M. Poo and X. Zhang, Proc. Natl. Acad. Sci. U.S.A., 106, 21906-21911 (2009).
[15] M. V. Tsodyks, K. Pawelzik and H. Markram, Neural Comput., 10, 821-835 (1998).
[16] R. S. Zucker and W. G. Regehr, Annu. Rev. Physiol., 64, 355-405 (2002).
[17] C. C. A. Fung, K. Y. M. Wong and S. Wu, Neural Comput., 22, 752-792 (2010).
[18] J. Wu, X. Huang and C. Zhang, The Neuroscientist, 14, 487-502 (2008).
[19] B. S. Gutkin, C. R. Laing, C. L. Colby, C. C. Chow and B. G. Ermentrout, J. Comput. Neurosci., 11, 121-134 (2001).
[20] G. Mongillo, O. Barak and M. Tsodyks, Science, 319, 1543-1546 (2008).
Chaitin-Kolmogorov Complexity and Generalization in Neural Networks

Barak A. Pearlmutter, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
Ronald Rosenfeld, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

We present a unified framework for a number of different ways of failing to generalize properly. During learning, sources of random information contaminate the network, effectively augmenting the training data with random information. The complexity of the function computed is therefore increased, and generalization is degraded. We analyze replicated networks, in which a number of identical networks are independently trained on the same data and their results averaged. We conclude that replication almost always results in a decrease in the expected complexity of the network, and that replication therefore increases expected generalization. Simulations confirming the effect are also presented.

1 BROKEN SYMMETRY CONSIDERED HARMFUL

Consider a one-unit backpropagation network trained on exclusive or. Without hidden units, the problem is insoluble. One point where learning would stop is when all weights are zero and the output is always 1/2, resulting in a mean squared error of 1/4. But this is a saddle point; by placing the discrimination boundary properly, one point can be gotten correctly, two with errors of 1/4, and one with error of 1, giving an MSE of 3/8, as shown in Figure 1.

Networks are initialized with small random weights, or noise is injected during training, to break symmetries of this sort. But in breaking this symmetry, something has been lost. Consider a kNN classifier, constructed from a kNN program and the training data. Anyone who has a copy of the kNN program can construct an identical classifier if they receive the training data. Thus, considering the classification as an abstract entity, we know its complexity cannot exceed that of the training data plus the overhead of the complexity of the program, which is fixed. But this is not necessarily the case for the backpropagation network we saw! Because of the introduction of randomly broken symmetries, the complexity of the classification itself can exceed that of the training data plus the learning procedure. Thus an identical classifier can no longer be constructed just from the program and the training data, because random factors have been introduced.

For a striking example, consider presenting a stochastic learner with one million exemplars of a "32 bit parity with 10,000 exceptions" problem. The complexity of the resulting function will be high, since in order to specify it we must specify not only the regularities of the training set, which we just did in a couple of words, but also which of the 4 billion possibilities are among the 10,000 exceptions.

Applying this idea to undertraining and overtraining, we see that there are two kinds of symmetries that can be broken. First, if not all the exemplars can be loaded, which of the outliers are not loaded can be arbitrary. Second, underconstrained networks that behave the same on the training set may behave differently on other inputs. Both phenomena can be present simultaneously.
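The saddle-point claim is easy to verify numerically. A small sketch (not from the paper) trains a single logistic unit on xor by gradient descent on the MSE: started exactly at zero weights the gradient vanishes and nothing moves, while a tiny random perturbation breaks the symmetry and the unit drifts to one of several asymmetric solutions, which one depending on the injected randomness:

```python
import numpy as np

# Sketch: a single logistic unit on xor. At w = 0 every output is 1/2 (the
# symmetric saddle, MSE = 1/4) and the gradient is exactly zero; a small
# random perturbation breaks the symmetry.
rng = np.random.default_rng(0)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)  # inputs + bias
y = np.array([0.0, 1.0, 1.0, 0.0])

def train(w, lr=0.5, steps=5000):
    for _ in range(steps):
        out = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ ((out - y) * out * (1 - out)) / 4   # MSE gradient (up to 2x)
        w = w - lr * grad
    return w

print(train(np.zeros(3)))                      # stays at the saddle: all zeros
print(train(0.01 * rng.standard_normal(3)))    # symmetry broken by the noise
```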
2 A COMPLEXITY BOUND

The expected value of the complexity of the function implemented by a network b trained on data d, where b is a potentially stochastic mapping, satisfies

E(C(b(d))) \le C(d) + C(b) + I(b(d)|d)

where I(b(d)|d) is the negative of the entropy of the bias distribution of b trained on d,

I(b(d)|d) = -H(b(d)) = \sum_f P(b(d) = f) \log P(b(d) = f),

where f ranges over functions that the network could end up performing, with the network regarded as a black box. This in turn is bounded by the information contained in the random internal parameters, or by the entropy of the watershed structure; but these are both potentially unbounded.

A number of techniques for improving generalization, when viewed in this light, work because they tighten this bound.

- Weight decay [2] and the statistical technique of ridge regression impose an extra constraint on the parameters, reducing their freedom to arbitrarily break symmetry when underconstrained.
- Cross validation attempts to stop training before too many symmetries have been broken.
- Efforts to find the perfect number of hidden units attempt to minimize the number of symmetries that must be broken.

These techniques strike a balance between undertraining and overtraining. Since in any realistic domain both of these effects will be simultaneously present, it would seem advantageous to attack the problem at the root.

Figure 1: The bifurcation of a perceptron trained on xor.

Figure 2: The training set. Crosses are negative examples and diamonds are positive examples.

One approach that has been rediscovered a number of times [1, 3], and systematically explored in its pure form by Lincoln and Skrzypek [4], is that of replicated networks.

3 REPLICATED NETWORKS

One might think that the complexity of the average of a collection of networks would be the sum of the complexities of the components; but this need not be the case. Consider an ensemble network, in which an infinite number of networks are taught the training data simultaneously, each making its random decisions according to whatever distributions the training procedure calls for, and their output averaged. We have seen that the complexity of a single network can exceed that of its training data plus the training program. But this is not the case with ensemble networks, since the ensemble network output can be determined solely from the program and the training data, i.e.

C(E(b(d))) \le C(b) + C(d) + C("replicate"),

where C("replicate") is the complexity of the instruction to replicate and average (a small constant). A simple way to approximate the ensemble machine is to train a number of networks simultaneously and average the results. As the number of networks is increased, the composite model approaches the ensemble network, which cannot have higher complexity than the training data plus the program plus the instruction to replicate.
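The quantity H(b(d)) in the bound above can be estimated empirically by retraining a small stochastic learner many times. A sketch (a hypothetical setup, reusing the xor unit from the previous sketch): each run's learned function is summarized by its thresholded outputs on the four inputs, and the plug-in entropy of the empirical distribution over these patterns gives a (downward-biased) estimate of the randomness absorbed by training:

```python
import numpy as np
from collections import Counter

# Sketch: Monte-Carlo estimate of the bias-distribution entropy H(b(d)) for
# the stochastic xor learner, in bits.
rng = np.random.default_rng(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def train(w, lr=0.5, steps=5000):
    for _ in range(steps):
        out = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ ((out - y) * out * (1 - out)) / 4
    return w

counts = Counter()
for _ in range(200):
    w = train(0.01 * rng.standard_normal(3))
    counts[tuple((X @ w > 0).astype(int))] += 1   # learned sign pattern
p = np.array(list(counts.values()), float) / 200
print(-(p * np.log2(p)).sum(), "bits of broken symmetry")
```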
Note that even if one accidentally stumbles across the perfect architecture and training regime, resulting in a net that always learns the training set perfectly but with no leftover capacity, and which generalizes as well as anything could, then making a replicated network can't hurt, since all the component networks would do exactly the same thing anyway.

A number of researchers seem to have inadvertently exploited this fact. For instance, Hampshire et al. [1] train a number of networks on a speech task, where the networks differed in choice of objective function. The networks' outputs were averaged to form the answer used in the recognition phase, and the generalization performance of the composite network was significantly higher than that of any of its component networks. Replicated implementations programmed from identical specifications is a common technique in software engineering of highly reliable systems.

4 THE ISSUE OF INDUCTIVE BIAS

The representational power of an ensemble is greater than that of a single network. By the usual logic, one would expect the ensemble to have worse generalization, since its inductive bias is weaker. Counterintuitively, this is not the case. For instance, the VC dimension of an ensemble of perceptrons is infinite, because it can implement an arbitrary three layer network, using replication to implement weights. This is much greater than the finite VC dimension of a single perceptron within the ensemble, but our analysis predicts better generalization for the ensemble than for a single stochastic perceptron when the bounds are tight, that is, when

H(b(d)) \gg C("replicate"). \qquad (1)

This leads to the conclusion that just knowing the inductive bias of a learner is not enough information to make strong conclusions about its expected generalization. Thus, distribution free results based purely on the inductive bias, such as VC dimension based PAC learning theory [5], may sometimes be unduly pessimistic.

As to replicated networks, we have seen that they cannot help but improve generalization when (1) holds. Thus, if one is training the same network over and over, perhaps with slightly different training regimes, and getting worse generalization than was hoped for, but on different cases each time, then one can improve generalization in a seemingly principled manner by putting all the trained networks in a box and calling it a finite sample of the ensemble network (and perhaps buying a bigger computer to run it on).

5 EMPIRICAL SUPPORT

We conducted the following experiment: 17 standard backpropagation networks (actually 20, but 3 were lost to a disk failure) were trained on a binary classification task. The nets all had identical architectures (2-20-1) but different initial weights, chosen uniformly from the interval [-1, 1]. The same training set was used to train all the networks. The functions implemented by each of the networks were then calculated in detail, and the performance of individual networks compared to that of their ensemble.

The classification task was a stochastic 2-D linear discriminator. Each point was obtained from a Gaussian centered at (0,0) with stdev 1. A classification of 1 was assigned to points with x >= 0, and 0 to points with x < 0, but reversed with an independent probability of 0.1. The final position of each point was then determined by adding a zero mean Gaussian with stdev .25. 200 points were so generated for the training set (shown in Figure 2) and another 1000 points for the test set.
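A rough replication of this experiment, using scikit-learn in place of the original backpropagation code (an assumed substitution, with modern default training settings rather than the original ones):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: 17 nets with 20 hidden units, identical data, different random
# initializations; outputs averaged to form the ensemble. Data generation
# follows the text: label = 1 iff x >= 0, flipped with probability 0.1,
# then positions jittered by N(0, 0.25^2) noise.
rng = np.random.default_rng(0)

def make_data(n):
    pts = rng.standard_normal((n, 2))
    lab = (pts[:, 0] >= 0).astype(float)
    flip = rng.random(n) < 0.1
    lab[flip] = 1.0 - lab[flip]
    return pts + 0.25 * rng.standard_normal((n, 2)), lab

Xtr, ytr = make_data(200)
Xte, yte = make_data(1000)
nets = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                     random_state=s).fit(Xtr, ytr) for s in range(17)]
preds = np.stack([np.clip(m.predict(Xte), 0.0, 1.0) for m in nets])
mses = ((preds - yte) ** 2).mean(axis=1)
print("individual test MSE: %.3f +/- %.3f" % (mses.mean(), mses.std()))
print("ensemble   test MSE: %.3f" % ((preds.mean(axis=0) - yte) ** 2).mean())
```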
Figure 3: The functions implemented by the 17 trained networks, and by their average (bottom right). Both the x and y axes run from -3 to 3, and grey levels are used to represent intermediate values in the interval [0,1].

Looking at Figure 3, each net appears to correctly classify as many of the inputs as possible, within the bounds imposed on it by its inductive bias. Each function implemented by such a net is roughly equivalent to a linear combination of 20 independent linear discriminators. It is therefore clear why each map consists of regions delineated by up to 20 straight lines. Since the initial conditions were different for each net, so were the resultant regions. All networks misclassified some of the exemplars (see Table 1), but the misclassifications were different for each network, illustrating symmetry breaking due to an overconstraining data set.

Table 1: Mean squared error and number of mislabeled exemplars for each network on the training set of 200.

net        MSE                    errors
12         0.0150837              3
9          0.0200039              4
16         0.0200026              4
5          0.0250207              5
7          0.0250213              5
10         0.0228319              5
13         0.0250156              5
17         0.0250018              5
19         0.0175466              5
6          0.0300099              6
15         0.0300075              6
18         0.0300060              6
8          0.0350609              7
11         0.0350006              7
20         0.0400013              8
14         0.0305254              9
4          0.0408391              13
mean       0.027469 ± 0.007226    6.058824 ± 2.261457
ensemble   0.016286               4
nohidden   0.060314               31

Note that the ensemble's performance on the training set is comparable to that of the best of the trained networks, while its performance on the test set is far superior. The MSE of the ensemble is much better than the bound obtained from Jensen's inequality, the average MSE. In fact, the ensemble network gets a lower MSE than all but one individual network on the training set, and a much lower MSE than any individual network on the test set; and it generalizes much better than any of the individual networks by a misclassification count metric.

Table 2: Mean squared error and number of mislabeled samples for each network on the test set of 1000. The performance of a theoretically perfect classifier (sign x) on the test set is 170 misclassifications, which is about what the network without hidden units gets.

net        MSE              errors
16         0.201            205
9          0.207            213
4          0.206            215
5          0.209            216
11         0.208            216
15         0.207            216
6          0.212            219
19         0.213            220
7          0.214            222
8          0.214            224
12         0.212            225
17         0.219            225
18         0.220            227
20         0.223            229
13         0.223            231
14         0.227            237
10         0.226            254
mean       0.214 ± 0.007    223 ± 10.7
ensemble   0.160            200
nohidden   0.0715           169

Table 3: Histogram of the networks' performance by number of misclassified training exemplars.

error count   networks
0
1
2
3             *
4             **
5             ******
6             ***
7             **
8             *
9             *
10
11
12
13            *
14
15
16

References

[1] J. Hampshire and A. Waibel. A novel objective function for improved phoneme recognition using time delay neural networks. Technical Report CMU-CS-89-118, Carnegie Mellon University School of Computer Science, March 1989.
[2] Geoffrey E. Hinton, Terrence J. Sejnowski, and David H. Ackley. Boltzmann Machines: Constraint satisfaction networks that learn. Technical Report CMU-CS-84-119, Carnegie-Mellon University, May 1984.
[3] Nathan Intrator. A neural network for feature extraction. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 719-726, San Mateo, CA, 1990. Morgan Kaufmann.
[4] Willian P. Lincoln and Josef Skrzypek. Synergy of clustering multiple back propagation networks. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 650-657, San Mateo, CA, 1990. Morgan Kaufmann.
[5] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
Gaussian sampling by local perturbations

George Papandreou, Department of Statistics, University of California, Los Angeles ([email protected])
Alan L. Yuille, Depts. of Statistics, Computer Science & Psychology, University of California, Los Angeles ([email protected])

Abstract

We present a technique for exact simulation of Gaussian Markov random fields (GMRFs), which can be interpreted as locally injecting noise to each Gaussian factor independently, followed by computing the mean/mode of the perturbed GMRF. Coupled with standard iterative techniques for the solution of symmetric positive definite systems, this yields a very efficient sampling algorithm with essentially linear complexity in terms of speed and memory requirements, well suited to extremely large scale probabilistic models. Apart from synthesizing data under a Gaussian model, the proposed technique directly leads to an efficient unbiased estimator of marginal variances. Beyond Gaussian models, the proposed algorithm is also very useful for handling highly non-Gaussian continuously-valued MRFs such as those arising in statistical image modeling or in the first layer of deep belief networks describing real-valued data, where the non-quadratic potentials coupling different sites can be represented as finite or infinite mixtures of Gaussians with the help of local or distributed latent mixture assignment variables. The Bayesian treatment of such models most naturally involves a block Gibbs sampler which alternately draws samples of the conditionally independent latent mixture assignments and the conditionally multivariate Gaussian continuous vector, and we show that it can directly benefit from the proposed methods.

1 Introduction

Using Markov random fields (MRFs) one can capture global statistical properties in large scale probabilistic networks while only explicitly modeling the interactions of neighboring sites. First introduced in statistical physics, MRFs and related models such as Boltzmann machines have proved particularly successful in computer vision and machine learning tasks such as image segmentation, signal recovery, texture modeling, classification, and unsupervised learning [1, 3, 5]. Drawing random samples from MRFs and juxtaposing them with real data allows one to directly assess the model quality. Sampling of MRFs also plays an important role within algorithms for model parameter fitting [7], signal estimation, and in image analysis for texture synthesis or inpainting [16, 19, 37]. The simplest but typically very slow way to draw random samples from MRFs is through single-site Gibbs sampling, a Markov chain Monte-Carlo (MCMC) algorithm in which one visits each node in the network and stochastically updates its state given the states of its neighbors [5].

Gaussian Markov random fields (GMRFs) are an important MRF class describing continuous variables linked by quadratic potentials [3, 22, 29, 33] (see Sec. 2). They are very useful both for modeling inherently Gaussian data and as building blocks for constructing more complex models. In this paper we study a technique which allows drawing exact samples from a GMRF in a single shot by first perturbing it and then computing the least energy configuration of the perturbed model. The perturbation involved amounts to independently injecting noise to each of the Gaussian factors/potentials in a fully distributed manner, as discussed in detail in Sec. 3. This reduction of sampling to quadratic energy minimization allows us to employ as black-box GMRF simulator any existing algorithm for MAP computation which is effective for a particular Gaussian graphical model.
This reduction of sampling to quadratic energy minimization allows us to employ as black-box GMRF simulator any existing algorithm for MAP computation which is effective for a particular Gaussian graphical model. 1 The reliability of the most likely solution in a Gaussian model is characterized by the marginal variances. Marginal variances also arise in computations within non-linear sparse Bayesian learning and compressed sensing models [11, 26, 32]. However, their computation can be very challenging and a host of sophisticated techniques have been developed for this purpose, which often only apply to restricted classes of models [12, 24, 25, 28]. Being able to efficiently sample from a GMRF makes it practical to employ the generic sample-based estimator for computing Gaussian variances, as discussed in Sec. 4. This estimator, whose accuracy is independent of the problem size, is particularly attractive if only relatively rough variance estimates suffice, as is often the case in practice. Gaussian models have proven inadequate for image modeling as they fail to capture important aspects of natural image statistics such as the heavy tails in marginal histograms of linear filter responses. Nevertheless, much richer statistical image tools can be built if we also incorporate into our models latent variables or allow nonlinear interactions between multiple Gaussian fields and thus the GMRF sampling technique we describe here is very useful within this wider setting [10, 16, 19, 34]. In Sec. 5 we discuss the integration of our GMRF sampling algorithm in a block-Gibbs sampling context, where the conditionally Gaussian continuous variables and the conditionally independent latent variables are sampled alternately. The most straightforward way to capture the heavy tailed histograms of natural images is to model each filter response with a Gaussian mixture expert, thus using a single discrete assignment variable at each factor [16, 23]. However, our efficient GMRF algorithm can also be used in conjunction with Gaussian scale mixture (GSM) models for which the latent scale variable is continuous [2]; we demonstrate this in the context of Bayesian signal restoration by sampling from the posterior distribution under a total variation (TV) prior, employing the GSM characterization of the Laplacian density. Further, our sampling technique also applies when the latent variables are distributed, with each hidden variable affecting multiple experts. An interesting case we examine is the recently proposed factored Gaussian restricted Boltzmann machine (GRBM) of [18], which takes into account residual correlations among visible units by modeling them as a multivariate GMRF, conditional on the distributed state of an adjacent layer of discrete hidden units. We show that we can effectively replace the hybrid Monte Carlo sampler used by [18] with a block-Gibbs sampler in which the visible conditionally Gaussian units are sampled collectively by local perturbations, potentially allowing extension of the current patch-based model to a full-image factored GRBM, as has been recently done for the fields of independent experts model [19, 23]. Our GMRF sampling algorithm relies on a property of Gaussian densities (see Sec. 3) which, in a somewhat different form, has appeared before in the statistics literature [21, 22]. 
However, [21, 22] emphasize direct matrix factorization methods for solving the linear system arising in computing the Gaussian mean, which cannot handle the large models we consider here, and do not discuss models with hidden variables. Variations of the sampling technique we study here have also been used in the image modeling work of [16] and very recently of [23]. However, the sampling technique in these papers is used as a tool and not studied by itself. Apart from highlighting the power and versatility of the efficient GMRF sampling algorithm and drawing the machine learning community's attention to it, our main novel contributions in this paper are: (1) our interpretation of the Gaussian sampling algorithm as local factor perturbation followed by mode computation, which highlights its distributed nature and implies that any Gaussian mean computation routine can be equally effectively employed for GMRF sampling; (2) the application of the efficient sampling algorithm in rapid sampling and variance estimation of very large Gaussian models; and (3) the demonstration that, in the presence of hidden variables, it can be effectively integrated in a block-Gibbs sampler not only in discrete but also in continuous GSM models, and in conjunction not only with local but also with distributed latent assignment representations.

2 Gaussian graphical models

2.1 The linear Gaussian model

We are working in the context of linear Gaussian models [20], in which a hidden vector x ∈ R^N is assumed to follow a prior distribution P(x) and noisy linear measurements y ∈ R^M of it are drawn with likelihood P(y|x). Specifically:

P(x) \propto N(Gx; \mu_p, \Sigma_p) \propto \exp\!\left(-\tfrac{1}{2} x^T J_x x + k_x^T x\right)
P(y|x) = N(y; Hx + c, \Sigma_n) \propto \exp\!\left(-\tfrac{1}{2} x^T J_{y|x} x + k_{y|x}^T x - \tfrac{1}{2} y^T \Sigma_n^{-1} y\right) \qquad (1)

where N(x; \mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) denotes the multivariate Gaussian density on x with mean μ and covariance Σ. It is convenient to express the prior and likelihood Gaussian densities on x in Eq. (1) in information form; the respective parameters are

J_x = G^T \Sigma_p^{-1} G, \quad k_x = G^T \Sigma_p^{-1} \mu_p, \qquad J_{y|x} = H^T \Sigma_n^{-1} H, \quad k_{y|x} = H^T \Sigma_n^{-1} (y - c). \qquad (2)

We recall that the information form of the Gaussian density N_I(x; k, J) \propto \exp\!\left(-\tfrac{1}{2} x^T J x + k^T x\right) employs the precision matrix J and the potential vector k [13]. If J is invertible, then the standard and information representations are equivalent, with μ = J^{-1} k and Σ = J^{-1}, but the information form with J symmetric positive semidefinite is also convenient for describing degenerate Gaussian densities. Further, the precision matrix directly reveals dependencies between subsets of variables in the network: x_i and x_j are conditionally independent, given the values of the remaining components of x, iff J_{i,j} = 0, while, in general, Σ_{i,j} ≠ 0; this implies that J is typically much sparser than Σ for GMRF models, as further discussed in Sec. 2.2.

By Bayes' rule, the posterior distribution of x given y is the product of the prior and likelihood terms and also has Gaussian density

P(x|y) = N(x; \mu, \Sigma), \quad \text{with} \quad \mu = J^{-1}\!\left(G^T \Sigma_p^{-1} \mu_p + H^T \Sigma_n^{-1} (y - c)\right) \quad \text{and} \quad \Sigma^{-1} = J = G^T \Sigma_p^{-1} G + H^T \Sigma_n^{-1} H. \qquad (3)

We assume J = J_x + J_{y|x} to be invertible, although we allow for singular J_x and/or J_{y|x}; in other words, the prior and likelihood jointly define a normalizable Gaussian density on x, although each of them on its own may leave a subspace of x unconstrained.
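A compact numerical sketch of Eqs. (1)-(3) on a dense toy model; all sizes and values are arbitrary assumptions:

```python
import numpy as np

# Sketch: assemble the posterior information parameters of Eq. (3) for a
# small dense linear Gaussian model and recover the posterior mean.
rng = np.random.default_rng(0)
N, K, M = 8, 8, 4
G = rng.standard_normal((K, N)); mu_p = np.zeros(K); Sp = np.eye(K)
H = rng.standard_normal((M, N)); c = np.zeros(M);   Sn = 0.1 * np.eye(M)

x_true = rng.standard_normal(N)
y = H @ x_true + c + rng.multivariate_normal(np.zeros(M), Sn)

J = G.T @ np.linalg.inv(Sp) @ G + H.T @ np.linalg.inv(Sn) @ H      # Eq. (3)
k = G.T @ np.linalg.inv(Sp) @ mu_p + H.T @ np.linalg.inv(Sn) @ (y - c)
mu = np.linalg.solve(J, k)        # posterior mean; Sigma would be inv(J)
print("posterior mean error:", np.linalg.norm(mu - x_true))
```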
2.2 Gaussian Markov random fields

The K rows of G = [g_1^T; ...; g_K^T] and the M rows of H = [h_1^T; ...; h_M^T] can be seen as two sets of length-N linear filters. The respective filter responses Gx and Hx determine the prior and likelihood models of Eq. (1). We define the filter set F = [f_1^T; ...; f_L^T], L = K + M, as the union of {g_k} and {h_m} and further assume that any two filter responses are conditionally independent given x or, equivalently, that the covariance matrices in Eq. (1) are diagonal, Σ_p = diag(σ_{p,1}, ..., σ_{p,K}) and Σ_n = diag(σ_{n,1}, ..., σ_{n,M}). Also let μ_p = (μ_{p,1}; ...; μ_{p,K}), y = (y_1; ...; y_M), and c = (c_1; ...; c_M). Then the posterior factorizes as a product of L Gaussian experts

P(x|y) \propto \prod_{l=1}^{L} \exp\!\left(-\tfrac{1}{2} x^T J_l x + k_l^T x\right) \propto \prod_{l=1}^{L} N(f_l^T x; \mu_l, \sigma_l), \qquad (4)

where the variances are σ_l = σ_{p,l}, l = 1 ... K, for the factors that come from the prior term and σ_l = σ_{n,l-K}, l = K+1 ... K+M, for those that come from the likelihood term; the corresponding means are μ_l = μ_{p,l} and μ_l = y_{l-K} - c_{l-K}, respectively. Comparing with Eq. (3), we see that the posterior Gaussian information parameters split additively as J = \sum_{l=1}^{L} J_l and k = \sum_{l=1}^{L} k_l. The individual Gaussian factors have potential vectors k_l = f_l \sigma_l^{-1} \mu_l and rank-one precision matrices J_l = f_l \sigma_l^{-1} f_l^T. Since J is invertible, L ≥ N. We see that there is a one-to-one correspondence between factors and filters; moreover, the (i, j) entry of J_l is non-zero iff both the i and j entries of f_l are non-zero. If the filter has T_l non-zero elements, then the corresponding Gaussian factor will couple the T_l variables in the clique x_{[l]}. The resulting GMRF is depicted in factor graph form in Fig. 1(a). It is straightforward to jointly model conditionally dependent filter responses by letting Σ_p or Σ_n have block diagonal structure, yielding multivariate Gaussian factors in Eq. (4).

2.3 Inference: Efficiently computing the posterior mean

Conceptually, the Gaussian posterior distribution is fully characterized by the posterior mean μ and covariance matrix Σ, which are given in closed form in Eq. (3): μ is the solution of a set of linear equations whose system matrix is the N × N precision matrix J, while Σ = J^{-1}. However, naively computing these quantities can be prohibitively expensive when working with high-dimensional models, requiring O(N^3) computation and O(N^2) space. For example, a typical 1 MP image model involves N = 10^6 variables; the corresponding symmetric covariance matrix Σ is generally dense and occupies as much space as about 5×10^5 equally-sized images. Thankfully, for the GMRF models mostly used in practice, there exist powerful inference algorithms which avoid explicitly inverting the system matrix J. In certain special cases direct methods are applicable for computing the mode, marginal variances, and samples from the posterior.

Figure 1: (a) The factor graph for the posterior GMRF contains the union f_{1:L} of prior and likelihood factors/filters. An edge between a filter and a site means that the corresponding coefficient is non-zero. The variables connected to each factor comprise a clique of the GMRF. (b) Filterbank implementation of the matrix-vector multiplication Jx arising in CG (φ̃_β is the spatial mirror of φ_β).
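The additive split of J and k over the factors is easy to check numerically; a toy sketch with random filters and diagonal factor variances:

```python
import numpy as np

# Sketch: for diagonal factor variances, the posterior precision and
# potential of Sec. 2.2 split over the L filters as J = sum_l f_l f_l^T/s_l
# and k = sum_l f_l mu_l/s_l.
rng = np.random.default_rng(1)
N, L = 6, 10
F = rng.standard_normal((L, N))    # rows are the filters f_l^T
s = rng.uniform(0.5, 2.0, L)       # factor variances sigma_l
mu = rng.standard_normal(L)        # factor means mu_l

J = sum(np.outer(F[l], F[l]) / s[l] for l in range(L))
k = sum(F[l] * mu[l] / s[l] for l in range(L))
assert np.allclose(J, F.T @ np.diag(1.0 / s) @ F)   # matrix form of the sum
assert np.allclose(k, F.T @ (mu / s))
print("rank-one factors add up to the full precision matrix")
```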
Exact inference can also be carried out in chain or tree-structured GMRFs using O(N ) Kalman filter equations which correspond to belief propagation (BP) updates recursively in time or scale [36]. A related direct approach which in the context of GMRFs has been studied in detail by [21, 22] relies on the Cholesky factorization of the precision matrix by efficient sparse matrix techniques, which typically re-order the variables in x so as to minimize the bandwidth W of J. The resulting algorithm has O(W 2 N ) speed and O(W N ) space complexity, which is still quite expensive for very large scale 2-D lattice image models, since the bandwidth W increases linearly with the spatial extent of the image and the support of the filters. More generally, for large scale and arbitrarily structured GMRFs one needs to resort to iterative techniques such as conjugate gradients, multigrid, or loopy BP in order to approximately solve the linear system in Eq. (3) and recover the most likely solution ?. Conjugate gradients (CG) [6] are generally applicable in our setup since the system matrix is positive definite. Each CG iteration involves a single matrix-vector multiplication Jx. By Sec. 2.2, this essentially amounts to computing PL the filter responses zl = flT x and the backprojection l=1 ??1 l zl fl , which respectively involves sending messages from the variables to the factors and back in the diagram of Fig. 1(a). The GMRFs arising in image modeling are typically defined on the image responses to a bank of linear filters {?? }, ? = 1 . . . , B; the spatial translation of each filter kernel ?? induces a subset of factors. In this context, the matrix-vector multiplication Jx in CG corresponds to convolutions and element-wise multiplications, as shown in the filterbank diagram of Fig. 1(b). The time complexity per iteration is thus low, typically O(N ) or O(N log N ), provided that the filter kernels ?? have small spatial support or correspond to wavelet or Fourier atoms for which fast discrete transforms exist, while computations can also be carried out in the GPU. The memory overhead is also minimal, O(N ), as CG employs only 3 or 4 auxiliary length-N vectors. The convergence rate of CG is largely problemdependent, but in many cases a relatively small number of iterations suffice to bring us close enough to the solution, especially if an effective preconditioner is used [6]. Multigrid algorithms also apply in certain of the GMRF models we consider, especially those related to physics-based variational energy and PDE formulations [29, 31]. When multigrid applies, as in the example of Sec. 3, it recovers the solution after a fixed number of iterations (independent of the problem size) and has optimal O(N ) time and space complexity. Loopy BP is a powerful distributed iterative method for computing ? which is guaranteed to converge for certain GMRF classes [13, 33]. 3 Gaussian sampling by independent factor perturbations Unlike direct methods, the iterative techniques discussed in Sec. 2.3 have been typically restricted to computing the posterior mode ? and considered less suited to posterior sampling or variance computation (but see Sec. 4). However, as the following result shows, exact sampling from a linear Gaussian model can be reduced to computing the mode of a Gaussian model with identical precision ? and thus the powerful iterative methods for matrix J but randomly perturbed potential vector k, recovering the mean can be used unmodified for sampling in large scale GMRFs. Specifically: Algorithm. 
3 Gaussian sampling by independent factor perturbations

Unlike direct methods, the iterative techniques discussed in Sec. 2.3 have typically been restricted to computing the posterior mode μ and considered less suited to posterior sampling or variance computation (but see Sec. 4). However, as the following result shows, exact sampling from a linear Gaussian model can be reduced to computing the mode of a Gaussian model with identical precision matrix J but randomly perturbed potential vector k, and thus the powerful iterative methods for recovering the mean can be used unmodified for sampling in large scale GMRFs. Specifically:

Algorithm. A sample x_s from the posterior distribution P(x|y) = N(x; μ, Σ) of Eq. (3) can be drawn using the following procedure: (1) Perturb the prior mean filter responses, μ̃_p ∼ N(μ_p, Σ_p). (2) Perturb the measurements, ỹ ∼ N(y, Σ_n). (3) Use the procedure for computing the posterior mode, keeping the same system matrix J and only replacing μ_p and y with their perturbed versions:

x_s = J^{−1} ( G^T Σ_p^{−1} μ̃_p + H^T Σ_n^{−1} (ỹ − c) ).

Indeed, x_s is a Gaussian random vector, as a linear combination of Gaussians, and has the desired mean E{x_s} = μ and covariance E{(x_s − μ)(x_s − μ)^T} = J^{−1} = Σ, as can readily be verified. Clearly, solving the corresponding linear system approximately will only yield an approximate sample. The reduction above implies that posterior sampling under the linear Gaussian model is computationally as hard as mode computation, provided that the structure of Σ_p and Σ_n allows efficient sampling from the corresponding distributions, using, e.g., the direct methods of Sec. 2.3. This algorithm is central to our paper; variations of it have appeared previously [16, 22, 23].

The sampling algorithm takes a particularly simple and intuitive form for the GMRFs discussed in Sec. 2.2. In this case Σ_p and Σ_n are diagonal and thus for sampling we perturb the factor means independently, μ̃_l ∼ N(μ_l, σ_l), l = 1…L, followed by finding the mode of the so perturbed GMRF in Eq. (4). The perturbation can equivalently be seen in the information parameterization as injecting a simple local Gaussian noise into each potential vector by k̃_l = k_l + f_l σ_l^{−1/2} ε_l, with ε_l ∼ N(0, 1), an operation carried out independently at each factor of the diagram in Fig. 1(a).

To demonstrate the power of this algorithm, we show in Fig. 2 an image inpainting example in which we fill in the occluded parts of a 498×495 image under a 2-D thin-membrane prior GMRF model [12, 29, 31], in which the Gaussian factors are induced by the first-order spatial derivative filters φ_1 = [−1 1] and φ_2 = [−1 1]^T. The shared variance parameter σ_l for the experts has been matched to the variance of the image derivative histogram. The presence of randomly placed measurements makes the problem non-stationary and thus Fourier domain techniques are not applicable. Finding the posterior mean of this model amounts to solving a quadratic energy minimization problem in which the non-occluded pixels are clamped to their observed values and corresponds to a Laplace PDE problem with non-homogeneous regularization, which can be tackled very efficiently with multigrid techniques [31]. To transform this efficient MAP computation technique into a powerful sampling algorithm for the thin-membrane GMRF, it suffices to inject noise into the factors, only perturbing the linear system's right hand side. Using a multigrid solver originally developed for solving PDE problems, we can draw about 4 posterior samples per second from the 2-D thin-membrane model of Fig. 2, which is particularly impressive given its size; the multilevel Gibbs sampling technique of [30] is the only other algorithm that could potentially achieve such speed in a similar setup, yet it cannot produce exact single-shot samples as our algorithm can.
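The algorithm translates almost line-for-line into code. The following sketch is our own rendering of the three steps (names are ours); `solve` stands for any mode-computation routine for the fixed system matrix J, e.g. the CG or multigrid solvers of Sec. 2.3.

```python
import numpy as np

def sample_posterior(G, H, sigma_p, sigma_n, mu_p, y, c, solve, rng):
    """One exact posterior sample by local perturbations.

    G, H  : (K, N) prior and (M, N) likelihood filter matrices.
    solve : callable returning J^{-1} b for a right-hand side b.
    """
    mu_p_t = mu_p + np.sqrt(sigma_p) * rng.standard_normal(len(sigma_p))  # step (1)
    y_t = y + np.sqrt(sigma_n) * rng.standard_normal(len(sigma_n))        # step (2)
    b = G.T @ (mu_p_t / sigma_p) + H.T @ ((y_t - c) / sigma_n)            # step (3)
    return solve(b)  # x_s = J^{-1}(G^T Sigma_p^{-1} mu_p~ + H^T Sigma_n^{-1}(y~ - c))
```

Only the right-hand side changes between draws, so any preconditioner, multigrid hierarchy, or factorization built for J can be reused across samples.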
Figure 2: Image inpainting by exact sampling from the posterior under a 2-D thin-membrane prior GMRF model, conditional on the image values at the known sites. From left to right: the masked image (big occluded areas plus 50% missing pixels), the posterior mean, a posterior sample obtained by our perturbed GMRF sampling algorithm, and the sample-based estimate of the posterior standard deviation (square root of the variance) using 20 samples (image values are between 0 and 1).

4 Posterior variance estimation

It is often desirable not only to compute the mode μ but also to recover aspects of the covariance structure in the posterior distribution. As we have discussed in Sec. 2.1, for very large models the fully-dense covariance matrix Σ is impractical to compute or store; however, we might be interested in certain of its elements. For example, the diagonal of Σ contains the variance of each variable and thus, along with the mean, fully describes the posterior marginal densities [29]. Marginal variances also need to be computed in Gaussian subproblems that arise in the context of non-Gaussian sparse Bayesian learning and relevance vector machine models used for regression, classification, and experimental design [11, 26, 32]. For many of these models variance estimation is the main computational bottleneck in applications involving large scale datasets.

A number of techniques have been proposed for posterior variance estimation. One approach has been to employ modified conjugate gradient algorithms which allow forming variance estimates in parallel to computing the posterior mode when solving the linear system in Eq. (3) [15, 24, 27]. These techniques utilize the close connection between conjugate gradients and the Lanczos method for determining eigensystems [6, 15] but unfortunately exhibit erratic numerical behavior in practice, especially when applied to large scale problems: loss of orthogonality due to finite numerical precision requires that one hold in memory the entire sequence of Lanczos vectors and periodically reorthogonalize them as the iteration progresses, significantly increasing the memory and time complexity relative to ordinary CG; the variance estimates typically converge much slower than the mean estimates; and one often has limited freedom in initializing the iteration and/or selecting the preconditioner. We refer to [25] for further information. It is well known that belief propagation computes exact variances in tree-structured GMRFs [36]. However, in graphs with cycles its loopy version typically underestimates the marginal variances since it overcounts the evidence, even when it converges to the correct means [13, 33]. The variance estimator of [28] is only applicable to GMRFs for which just a small number of edges violates the graph's tree structure. The method in [12] relies on a low-rank approximation of the N×N unit matrix, carefully adapted to the problem covariance structure, also employing a wavelet hierarchy for models exhibiting long-range dependencies. One then needs to solve as many linear systems as the approximation rank, which in turn increases with the model size ([12] reports a rank of 448 for a relatively smooth model with about 10^6 variables). This technique is thus still relatively expensive and not necessarily generally applicable.

The ability to efficiently sample from the Gaussian posterior distribution using the algorithm of Sec. 3 immediately suggests the following Monte Carlo estimator of the posterior covariance matrix

Σ̂ = (1/S) Σ_{s=1}^{S} (x_s − μ)(x_s − μ)^T.   (5)

If only the posterior variances are required, one will obviously just evaluate and retain the diagonal of the outer-products in the sum; any other selected elements of Σ̂ can similarly be obtained. Clearly, the proposed estimator is unbiased. Its relative variance estimation error follows from the properties of the χ² distribution and is r = σ(Σ̂_{i,i})/Σ_{i,i} = √(2/S). The error drops quite slowly with the number of samples (S = 2/r² samples are required to reach a desired relative error r), so the technique is best suited if rough variance estimates suffice, which is often the case in practical applications [26]; e.g., 50 samples suffice to reduce r to 20%. A desirable property of the estimator is that its accuracy is independent of the problem size N, in contrast to most alternative techniques. The proposed variance estimation technique can thus be readily applied to every GMRF at a cost of S times that of computing μ. We show in Fig. 2 the result of applying the proposed variance estimator for the thin-membrane GMRF example considered in Sec. 2.3; within only 20 samples (computed in 5 sec.) the qualitative structure of the variance in the model has been captured.
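A minimal sketch of the estimator in Eq. (5), restricted to the posterior variances (the diagonal of Σ̂); this is our own illustration, and `sample_fn` is assumed to wrap the perturbation sampler of Sec. 3.

```python
import numpy as np

def mc_posterior_variance(sample_fn, mu, S):
    """Estimate diag(Sigma) from S perturbation samples, Eq. (5).

    The relative error of each variance estimate is roughly sqrt(2/S),
    e.g. S = 50 gives r ~ 20%, independently of the problem size N."""
    acc = np.zeros_like(mu)
    for _ in range(S):
        d = sample_fn() - mu
        acc += d * d  # keep only the diagonal of the outer product
    return acc / S
```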
5 Block Gibbs sampling in conditionally Gaussian Markov random fields

Following the intuition behind Gaussian sampling by local perturbations, one could try to inject noise into the local potentials and find the mode of the perturbed model, even in the presence of non-quadratic MRF factors. Although such a randomization process is interesting in its own right and deserves further study, it is not feasible to design it in a way that leads to single-shot algorithms for exact sampling of non-Gaussian MRFs. Without completely abandoning the Gaussian realm, we can get versatile models in which some hidden variables q control the mean and/or variance of the Gaussian factors. Conditional on the values of these hidden variables, the data are still Gaussian

P(x|q) ∝ ∏_{l=1}^{L} N(f_l^T x; μ_{l,q}, σ_{l,q}),   (6)

where we have dropped the dependence on the measurements y for simplicity. Sampling from this model can be carried out efficiently (but not in a single shot any more) by alternately block sampling from P(x|q) and P(q|x), which typically mixes rapidly and is much more efficient than single-site Gibbs sampling [35]. For large models this is feasible because, given the hidden variables, we can update the visible units collectively using the GMRF sampling by local perturbations algorithm, similarly to [16, 23]. We assume that block sampling of the hidden units given the visible variables is also feasible, by considering their conditional distribution independent or tree-structured [16]. One typically employs one discrete hidden variable q_l per factor f_l, leading to a mixture of Gaussian local experts for which the joint distribution of visible and hidden units is P(x, q) ∝ ∏_{l=1}^{L} Σ_{j=1}^{J_l} π_{l,j} N(f_l^T x; μ_{l,j}, σ_{l,j}) [4, 16, 23, 34]. Intuitively, the discrete latent unit q_l turns off the smoothness constraint enforced by the factor f_l by assigning a large variance σ_{l,j} to it when an image edge is detected. The block Gibbs sampler leads to a rapidly mixing Markov chain which after a few burn-in iterations generates a sequence of samples {{x_1, q_1}, …, {x_S, q_S}} that explore the joint distribution P(x, q). Summarizing the sample sequence into a unique estimate x̂ should be problem dependent.
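The alternation just described reduces to a short generic loop. The skeleton below is a schematic sketch (names and the burn-in handling are ours); the two conditional samplers are supplied by the model, e.g. perturbation-based GMRF sampling for x|q and independent per-factor draws for q|x.

```python
def block_gibbs(x0, sample_x_given_q, sample_q_given_x, n_steps, burn_in=5):
    """Two-block Gibbs sweeps for a conditionally Gaussian MRF.

    sample_x_given_q : collective GMRF sample of the visible units given q.
    sample_q_given_x : update of the hidden units, which decouple given x.
    """
    x = x0
    q = sample_q_given_x(x)
    samples = []
    for t in range(n_steps):
        x = sample_x_given_q(q)
        q = sample_q_given_x(x)
        if t >= burn_in:
            samples.append((x, q))
    return samples
```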
If we strive for minimizing the estimation's mean square error, as typically is the case in image denoising, our goal should be to induce the posterior mean from the sample sequence [23]. Apart from the standard sample-based posterior mean estimator x̂_S = (1/S) Σ_{s=1}^{S} x_s, we can alternatively estimate the posterior mean with the Rao-Blackwellized (RB) estimator x̂_RB = (1/S) Σ_{s=1}^{S} E{x|q_s} [16], which offers increased accuracy but requires finding the means of the conditionally Gaussian MRFs P(x|q), typically doubling the cost per step. Beyond MMSE, in applications such as image inpainting or texture synthesis, the posterior mean can be overly smooth and selecting a single sample from the simulation as the solution can be visually more plausible [8], as can be appreciated by comparing the MMSE and sample reconstructions of the textured areas in the inpainting example of Fig. 2.

Figure 3: Signal restoration under a total variation prior model and alternative estimation criteria. (Panels: original; noisy, 21.9 dB; TV-MAP, 29.0 dB; Gibbs sample, 28.4 dB; sample mean, 30.0 dB; Rao-Blackwellized, 30.3 dB.)

The heavy-tailed histograms of natural image filter responses are often conveniently approximated by kurtotic continuous parametric distributions [10, 19, 35]. We can still resort to block Gibbs sampling for efficiently exploring the posterior distribution of the signal x if each expert can be represented as a continuous Gaussian scale mixture (GSM) [2], as has been done before for Student-t experts [35]. Motivated by [14, 23], we show here how this can lead to a novel Bayesian treatment of signal restoration under a total variation (TV) prior P(x) ∝ ∏_{l=1}^{N−1} L(∇x_l; λ), which imposes an L1 penalty on the signal differences ∇x_l = x_l − x_{l+1}. We rely on the hierarchical characterization of the Laplacian density L(z; λ) = 1/(2λ) exp(−|z|/λ) as a GSM in which the variance follows an exponential distribution [2, 17]:

L(z; λ) = 1/(2λ²) ∫_0^∞ N(z; 0, v) exp(−v/(2λ²)) dv.

Thanks to the GSM nature of this representation and assuming a Gaussian measurement model, the conditionally Gaussian visible variables are easy to sample. Further, the latent variances v_l conditionally decouple and have density P(v_l|x) ∝ v_l^{−1/2} exp(−|∇x_l|²/(2v_l) − v_l/(2λ²)), which can be recognized as a generalized inverse Gaussian distribution for which standard sampling routines exist. The derivation above carries over to the 2-D TV model, with the gradient magnitude at each pixel replacing |∇x_l|.

We demonstrate our Bayesian TV restoration method in a signal denoising experiment illustrated in Fig. 3. We synthesized a length-1000 signal by integrating Laplacian noise (λ = 1/8), also adding jumps of height 5 at four locations (outliers), and subsequently degraded it by adding Gaussian noise (with variance 1). We depict the standard TV-MAP restoration result, as well as plausible solutions extracted from a 10-step block-Gibbs sampling run with our GSM-based Bayesian algorithm: the 10th sample itself, and the two MMSE estimates outlined above (sample mean and RB). As expected, the two mean estimators are best in terms of PSNR (with the RB one slightly superior). The standard TV-MAP estimator captures the edges more sharply but has a lower PSNR score and produces staircase artifacts. Although the random sample performs the worst in terms of PSNR, it resembles most closely the qualitative properties of the original signal, capturing its fine structure. These findings shed new light on the critical view of [14] on MAP-based denoising.
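For the TV model the latent-variance update has the closed form derived above. The sketch below is our own rendering of that single Gibbs step; it assumes SciPy's geninvgauss distribution (available since SciPy 1.4) and uses our conversion from the general GIG(p, a, b) parameterization to SciPy's (p, b, scale) one.

```python
import numpy as np
from scipy.stats import geninvgauss

def sample_tv_variances(x, lam, rng):
    """Sample v_l | x for the 1-D TV / Laplacian-GSM model.

    P(v_l|x) ~ v^{-1/2} exp(-|dx_l|^2/(2v) - v/(2 lam^2)) is GIG with
    p = 1/2, a = 1/lam^2, b = dx_l^2."""
    dx = x[:-1] - x[1:]            # signal differences nabla x_l
    a = 1.0 / lam**2
    b = dx**2 + 1e-12              # guard against exactly-zero differences
    # SciPy's geninvgauss has density ~ z^{p-1} exp(-b_s (z + 1/z)/2) up to scale;
    # the general GIG(p, a, b) maps to b_s = sqrt(a*b) and scale = sqrt(b/a).
    return geninvgauss.rvs(0.5, np.sqrt(a * b), scale=np.sqrt(b / a),
                           random_state=rng)
```

For the 2-D TV model the same update applies with the per-pixel gradient magnitude in place of dx_l, as noted in the text.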
We must emphasize that the block Gibbs sampling strategy outlined above, in conjunction with our GMRF sampling by local perturbations algorithm, is equally well applicable when the latent variables are distributed, with each hidden variable affecting multiple experts, as illustrated in Fig. 4(a). This situation arises in the context of unsupervised learning of hierarchical models applied to real-valued data, where it is natural to use a Gaussian restricted Boltzmann machine (GRBM) in the first layer of the hierarchy. Training GRBMs with contrastive divergence [7] requires drawing random samples from the model. Sampling the visible layer given the layer of discrete hidden variables is easy if there are no sideways connections between the continuous visible units, as assumed in [9]. To take into account residual correlations among the visible units, the authors of the factored GRBM in [18] drop the conditional independence assumption, but resort to difficult-to-tune hybrid Monte Carlo (HMC) for sampling. Employing our Gaussian sampling by local perturbations scheme we can efficiently jointly sample the correlated visible units, which allows us to still use the more efficient block-Gibbs sampler in training the model of [18]. To verify this, we have accordingly replaced the sampling module in the publicly available implementation of [18], and have closely followed their setup, leaving their model otherwise unchanged. For conditionally Gaussian sampling of the correlated visible units we have used our local perturbation algorithm, coupled with 5 iterations of conjugate gradients running on the GPU. Contrastive divergence training was done on the dataset accompanying their code, which comprises 10240 16×16 color patches randomly extracted from the Berkeley dataset and statistically whitened. The receptive fields learned by this procedure are depicted in Fig. 4(b) and look qualitatively the same as those reported in [18], while computation time was reduced by a factor of two. Besides this moderate computation gain, the main interest in perturbed Gaussian sampling in this setup lies in its scalability, which offers the potential to move beyond the patch-based representation and sample from whole-image factored GRBM models, similarly to what has recently been achieved in [23] for the field of independent experts model [19].

Figure 4: (a) Each hidden unit can control a single factor (such as q_1 above) or it can affect multiple experts, resulting in models with distributed latent representations. (b) The visible-to-factor filters arising in the factored GRBM model of [18], as learned using block Gibbs sampling.

Acknowledgments

This work was supported by the NSF award 0917141 and the AFOSR grant 9550-08-1-0489.

References

[1] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cogn. Science, 9(1):147–169, 1985.
[2] D. Andrews and C. Mallows. Scale mixtures of normal distributions. JRSS (B), 36(1):99–102, 1974.
[3] J. Besag. Spatial interaction and the statistical analysis of lattice systems. JRSS (B), 36(2):192–236, 1974.
[4] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process., 4(7):932–946, 1995.
[5] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, 6(6):721–741, 1984.
[6] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins Press, 1996.
[7] G. Hinton.
Training products of experts by minimizing contrastive divergence. Neur. Comp., 14(8):1771–1800, 2002.
[8] A. Kokaram. Motion Picture Restoration. Springer, 1998.
[9] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proc. ICML, 2009.
[10] S. Lyu and E. Simoncelli. Modeling multiscale subbands of photographic images with fields of Gaussian scale mixtures. IEEE Trans. PAMI, 31(4):693–706, Apr. 2009.
[11] D. MacKay. Bayesian interpolation. Neur. Comp., 4(3):415–447, 1992.
[12] D. Malioutov, J. Johnson, M. Choi, and A. Willsky. Low-rank variance approximation in GMRF models: Single and multiscale approaches. IEEE Trans. Signal Process., 56(10):4621–4634, Oct. 2008.
[13] D. Malioutov, J. Johnson, and A. Willsky. Walk sums and belief propagation in Gaussian graphical models. J. of Mach. Learning Res., 7:2031–2064, 2006.
[14] M. Nikolova. Model distortions in Bayesian MAP reconstruction. Inv. Pr. and Imag., 1(2):399–422, 2007.
[15] C. Paige and M. Saunders. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. on Math. Software, 8(1):43–71, 1982.
[16] G. Papandreou, P. Maragos, and A. Kokaram. Image inpainting with a wavelet domain hidden Markov tree model. In Proc. ICASSP, pages 773–776, 2008.
[17] T. Park and G. Casella. The Bayesian lasso. J. of the Amer. Stat. Assoc., 103(482):681–686, 2008.
[18] M. Ranzato, A. Krizhevsky, and G. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In Proc. AISTATS, 2010.
[19] S. Roth and M. Black. Fields of experts. Int. J. of Comp. Vis., 82(2):205–229, 2009.
[20] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neur. Comp., 11:305–345, 1999.
[21] H. Rue. Fast sampling of Gaussian Markov random fields. JRSS (B), 63(2):325–338, 2001.
[22] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall, 2005.
[23] U. Schmidt, Q. Gao, and S. Roth. A generative perspective on MRFs in low-level vision. In CVPR, 2010.
[24] M. Schneider and A. Willsky. Krylov subspace estimation. SIAM J. Sci. Comp., 22(5):1840–1864, 2001.
[25] M. Seeger and H. Nickisch. Large scale variational inference and experimental design for sparse generalized linear models. Technical Report TR-175, MPI for Biological Cybernetics, 2008.
[26] M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf. Bayesian experimental design of magnetic resonance imaging sequences. In NIPS, pages 1441–1448, 2008.
[27] J. Skilling. Bayesian numerical analysis. In W. Grandy and P. Milonni, editors, Physics and Probability, pages 207–221. Cambridge Univ. Press, 1993.
[28] E. Sudderth, M. Wainwright, and A. Willsky. Embedded trees: Estimation of Gaussian processes on graphs with cycles. IEEE Trans. Signal Process., 52(11):3136–3150, Nov. 2004.
[29] R. Szeliski. Bayesian modeling of uncertainty in low-level vision. Int. J. of Comp. Vis., 5(3):271–301, 1990.
[30] R. Szeliski and D. Terzopoulos. From splines to fractals. In Proc. ACM SIGGRAPH, pages 51–60, 1989.
[31] D. Terzopoulos. The computation of visible-surface representations. IEEE Trans. PAMI, 10(4):417–438, 1988.
[32] M. Tipping. Sparse Bayesian learning and the relevance vector machine. J. of Mach. Learning Res., 1:211–244, 2001.
[33] Y. Weiss and W. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neur. Comp., 13(10):2173–2200, 2001.
[34] Y. Weiss and W. Freeman.
What makes a good model of natural images? In CVPR, 2007.
[35] M. Welling, G. Hinton, and S. Osindero. Learning sparse topographic representations with products of Student-t distributions. In NIPS, 2002.
[36] A. Willsky. Multiresolution Markov models for signal and image processing. Proc. IEEE, 90(8):1396–1458, 2002.
[37] S. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. Int. J. of Comp. Vis., 27(2):107–126, 1998.
Avoiding False Positive in Multi-Instance Learning

Yanjun Han, Qing Tao, Jue Wang
Institute of Automation, Chinese Academy of Sciences
Beijing, 100190, China
yanjun.han, qing.tao, [email protected]

Abstract

In multi-instance learning, there are two kinds of prediction failure, i.e., false negative and false positive. Current research mainly focuses on avoiding the former. We attempt to utilize the geometric distribution of instances inside positive bags to avoid both the former and the latter. Based on kernel principal component analysis, we define a projection constraint for each positive bag to classify its constituent instances far away from the separating hyperplane while placing positive instances and negative instances at opposite sides. We apply the Constrained Concave-Convex Procedure to solve the resulting problem. Empirical results demonstrate that our approach offers improved generalization performance.

1 Introduction

Multi-instance learning (MIL) was first proposed by Dietterich et al. in [1] to predict the binding ability of a drug from its biochemical structure. A certain drug molecule corresponds to a set of conformations which cannot be differentiated via chemical experiments. A drug is labeled positive if any of its constituent conformations has binding ability greater than the threshold, and negative otherwise. Therefore, each sample (a drug) is a bag of instances (its constituent conformations). In multi-instance learning the label information for positive samples is incomplete, in that the instances in a certain positive bag are all labeled positive. Generally, methods for multi-instance learning are modified versions of approaches for supervised learning, obtained by shifting the focus from discrimination on instances to discrimination on bags. The earliest exploration was the APR algorithms proposed in [1]. From then on, a number of approaches emerged. Examples include Diverse Density [2], Citation k-NN [3], MI-SVMs [4], MI-kernels [5], reg-SVM [6], MissSVM [7], sbMIL, stMIL [8], PPMM [9], MIGraphs [10], etc. Many real-world applications can be regarded as multi-instance learning problems. Examples include image classification [11], document categorization [12], computer aided diagnosis [13], etc.

As far as positive bags are concerned, current research usually treats them as labyrinths in which witnesses (responsible positive instances) are encaged, and considers the nonwitnesses (the other instances) therein to be useless or even distractive. The information carried by nonwitnesses is not well utilized. In fact, nonwitnesses are indispensable for characterizing the overall instance distribution, and thus help to improve the learner. Several researchers realized the importance of nonwitnesses and attempted to utilize them. In MI-kernels [5] and reg-SVM [6], nonwitnesses together with witnesses are squeezed into the kernel matrix. In mi-SVM [4], the labels of all nonwitnesses are treated as unknown integer variables to be optimized. mi-SVM tends to misclassify negative instances in positive bags since the resulting margin will be larger. We will elaborate on this flaw in Section 3.1. In MissSVM [7] and stMIL [8], multi-instance learning is addressed from the view of semi-supervised learning, and nonwitnesses are treated as unlabeled data whose labels should be assigned to maximize the margin. sbMIL [8] attempts to estimate the ratio of positive instances inside positive bags and utilize this information in the subsequent classification.
MissSVM, sbMIL and stMIL suffer from the same flaw as mi-SVM.

Figure 1: Illustration of the False Positive Phenomenon: The top image is a positive training sample, and the bottom image is a negative testing sample. Two marker symbols denote positive and negative instances, respectively. Enveloped points are instances in a positive bag. The point not enveloped is a negative bag of just one instance. Separating plane F_i corresponds to f(x) = i, and G_i corresponds to g(x) = i. The learners f and g are obtained with and without the projection constraint, respectively. Instances are labeled according to f. For details, please refer to the passage below.

The neglect of nonwitnesses in positive bags may lead to false positives and cause a model to misclassify unseen negative samples. For example, in natural scene classification, each image is segmented into a bag of instances beforehand, and each instance is a patch (ROI, Region Of Interest) characterized by one feature vector describing its color. The task is to predict whether an image contains a waterfall or not (Figure 1). A positive image contains some positive instances corresponding to the waterfall and some negative instances from other categories such as sky, stone, grass, etc., while a negative bag exclusively contains negative instances from other categories. Naturally, some negative instances (patches) only exist in positive bags. For instance, the end of a waterfall is often surrounded by mist. The aforementioned approaches tend to misclassify negative instances in positive bags. Therefore, the patch corresponding to mist is misclassified as positive. Given an unseen image with cirrus cloud and without a waterfall, the obtained learner will misclassify this image as positive because cirrus cloud and mist are similar to each other.

To avoid both false negatives and false positives, we attempt to classify instances inside positive bags far from the separating hyperplane and place positive and negative instances at opposite sides. We achieve this by introducing projection constraints based on kernel principal component analysis into MI-SVM [4]. Each constraint is defined on a positive bag to encourage large variance of its constituent instances along the normal direction of the separating hyperplane. We apply the Constrained Concave-Convex Procedure (CCCP) to solve the resulting optimization problems.

The remainder of the paper is organized as follows: Section 2 introduces the notation conventions and the CCCP. In Section 3 we bring out the projection constraint and the corresponding formulation for multi-instance learning. In Section 4, the algorithm is evaluated on real world data sets. Finally, conclusions are drawn in Section 5.

2 Preliminaries

2.1 Notation Convention

The origin of multi-instance learning [1] has been presented in Section 1. Let X ⊆ R^p be the space containing instances and D = {(B_i, y_i)}_{i=1}^{m} be the training data, where B_i is the i-th bag of instances {x_{i1}, …, x_{in_i}} and y_i ∈ Y is the label for B_i. Y is {+1, −1} for classification and R for regression. In addition, denote the index set for the instances x_{ij} of B_i by I_i. The task is to train a learner to predict the label of an unseen bag. Compared with traditional supervised learning, the learner is a mapping from 2^X to Y instead of from X to Y. Denote the index sets for positive and negative bags by I_+ and I_− respectively. Without loss of generality, assume that the instances are ordered in the sequence {x_{11}, …, x_{1n_1}, …, x_{m1}, …, x_{mn_m}}.
We index instances by the function I(x_{ij}) = Σ_{k=1}^{i−1} n_k + j, and I(B_i) returns the vector (Σ_{k=1}^{i−1} n_k + 1, …, Σ_{k=1}^{i−1} n_k + n_i).

2.2 Constrained Concave-Convex Procedure

Non-convex optimizations are undesirable because few algorithms effectively converge even to a local optimum. However, if both the objective function and the constraints take the form of a difference between two convex functions, then a non-convex problem can be solved efficiently by the constrained concave-convex procedure [14]. The fundamental idea is to eliminate the non-convexity by changing the non-convex parts to their first-order Taylor expansions. The original problem is as follows:

min_x f_0(x) − g_0(x)
s.t. f_i(x) − g_i(x) ≤ c_i, i = 1, …, m   (1)

where f_i, g_i (i = 0, …, m) are real-valued, convex and differentiable functions on R^n. Starting from a random x^(0), (1) is approximated by a sequence of successive convex optimization problems. At the (t+1)-th iteration, the non-convex parts in the objective and constraints are substituted by their first-order Taylor expansions, and the resulting optimization problem is as follows:

min_x f_0(x) − [g_0(x^(t)) + ∇g_0(x^(t))^T (x − x^(t))]   (2)
s.t. f_i(x) − [g_i(x^(t)) + ∇g_i(x^(t))^T (x − x^(t))] ≤ c_i

where x^(t) is the optimal solution to (2) at the t-th iteration. The above procedure is repeated until convergence. In [14] it is proved that the CCCP converges to a local optimum of (1).

3 Multi-Instance Classification

3.1 Support Vector Machine Formulation

Our work is based on the support vector machine (SVM) formulations for multi-instance learning, to be exact, the MI-SVM [4], as follows:

min_{w,b,ξ} ½‖w‖² + C [Σ_{i∈I_+} ξ_i + Σ_{j∈I_i, i∈I_−} ξ_{ij}]   (3)
s.t. max_{j∈I_i} (w^T x_{ij} + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i ∈ I_+
     −w^T x_{ij} − b ≥ 1 − ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_−

Compared with the conventional SVM, in MI-SVM the notion of slack variables for positive samples is extended from individual instances to bags, while that for negative samples remains unchanged. As shown by the first set of max constraints, only the "most positive" instance in a positive bag, i.e., the witness, can affect the margin. The other instances, the nonwitnesses, become irrelevant for determining the position of the separating plane once the witness is specified.

The max constraint at first sight seems to embody the characteristic of multi-instance learning well. Indeed, it helps to avoid false negatives, i.e., the misclassification of positive samples. However, it may incur false positives, for the following two reasons. Firstly, the max constraint aims at discovering the witness, and tends to skip nonwitnesses. Thus each positive bag is approximately oversimplified to a single pattern, i.e., the witness. Most information in positive bags is wasted. Secondly, due to the characteristic of the max function and the greediness of optimization methods, the predictions of nonwitnesses are often adjusted above zero in the learning process. Besides, there is no mechanism to draw the predictions of nonwitnesses below zero. Nevertheless, many nonwitnesses in positive bags are factually negative instances. For example, in natural scene classification, many image patches in a positive bag come from the irrelevant background; in document categorization, many posts in a positive bag are not from the target category. Hence, many nonwitnesses are mislabeled as positive, and we obtain a falsely large margin.
As shown in Figure 1, MI-SVM classifies half of the instances in the training sample as positive, and some negative instances are mislabeled. This false positive will impair the generalization performance.

3.2 Projection Constraint

The above problem is not unique to MI-SVM. Any approach that does not properly utilize nonwitnesses has the same problem. In our preliminary work before this paper, we tried three solutions. Firstly, we treat the labels of all nonwitnesses as unknown integer variables to be optimized. In the SVM framework, this is exactly the mi-SVM [4]:

min_{y_{ij}} min_{w,b,ξ} ½‖w‖² + C Σ_{j∈I_i, i∈I_+∪I_−} ξ_{ij}   (4)
s.t. y_{ij}(w^T x_{ij} + b) ≥ 1 − ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_+
     Σ_{j∈I_i} (y_{ij} + 1)/2 ≥ 1, i ∈ I_+
     −w^T x_{ij} − b ≥ 1 − ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_−

It seems that assigning labels over all nonwitnesses should lead to a reasonable model. Nevertheless, nonwitnesses are usually labeled positive since the consequent margin will be larger. Thus, many nonwitnesses are misclassified. As far as the example in Figure 1 is concerned, the obtained learner is g(x) instead of f(x). MissSVM [7] takes an unsupervised approach. For every instance in positive bags, two slack variables are introduced, measuring the distances from the instance to the positive boundary f(x) = +1 and the negative boundary f(x) = −1 respectively, and the label of the instance depends on the smaller slack variable. stMIL [8] takes a similar approach. Like mi-SVM, MissSVM and stMIL also suffer from misclassification of nonwitnesses. sbMIL [8] tackles multi-instance learning in two steps. The first step is similar to MI-SVM, and the second step is a traditional SVM. Still, there is no mechanism in sbMIL to avoid false positives.

In the second solution, we simultaneously seek the "most positive" instance and the "most negative" instance in a positive bag by adding the following constraints to (3):

(−1) · min_{j∈I_i} (w^T x_{ij} + b) ≥ 1 − δ_i, δ_i ≥ 0, i ∈ I_+   (5)

and the term Σ_{i∈I_+} ξ_i in the objective of (3) is changed to Σ_{i∈I_+} (ξ_i + δ_i). Although misclassification of nonwitnesses is alleviated, since at least the "most negative" nonwitness is classified correctly, the information carried by most nonwitnesses is not fully utilized. As far as the example in Figure 1 is concerned, the obtained learner is still g(x) instead of f(x). Besides, this solution is not appropriate for applications which involve positive bags with only positive instances.

The third solution is the projection constraint proposed in this paper. In a maximum margin framework we want to classify instances in a positive bag far away from the separating hyperplane while placing positive instances and negative instances at opposite sides. From another point of view, in the feature (kernel) space, we want to maximize the variance of instances in a positive bag along w, the normal vector of the separating hyperplane. Therefore, principal component analysis (PCA) [15] is just the technique that we need. To tackle complicated real world datasets, we directly develop our approach in a Reproducing Kernel Hilbert Space (RKHS). Let X be the space of instances, and H be a RKHS of functions f: X → R with associated kernel function k(·,·). Note that f is both a function on X and a vector in H. With an abuse of notation, we will not differentiate them unless necessary. Denote the RKHS norm of H by ‖f‖_H. Then MI-SVM can be rewritten as follows:

min_{f∈H,ξ} ½‖f‖² + C [Σ_{i∈I_+} ξ_i + Σ_{j∈I_i, i∈I_−} ξ_{ij}]   (6)
s.t. max_{j∈I_i} f(x_{ij}) ≥ 1 − ξ_i, ξ_i ≥ 0, i ∈ I_+
     −f(x_{ij}) ≥ 1 − ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_−
Figure 2: Illustration of the Effect of the Projection Constraint: Please note that the projection constraint is effective for datasets with any geometric distribution once an appropriate kernel is selected. Enveloped points are instances in a positive bag. Points not enveloped are negative bags of just one instance. Separating plane F_i corresponds to f(x) = i, and G_i corresponds to g(x) = i. The learners f and g are obtained with and without the projection constraint, respectively. Instances are labeled according to f. Two marker symbols denote positive and negative instances, respectively.

According to the representer theorem [16], each minimizer f ∈ H of (6) has the following form:

f = Σ_{i∈I_+∪I_−} Σ_{j∈I_i} α_{ij} φ(x_{ij})   (7)

where all α_{ij} ∈ R, and φ(·), induced by k(·,·), is the feature mapping from X to H. Next, we will propose our key contribution, i.e., the projection constraint. Given a positive bag B_i, denote its instances by {x_{ij}}_{j=1}^{n_i}, and denote the normal vector of the separating plane in the RKHS by f. According to the theory of PCA [15, 17], maximizing the variance of the mapped instances {φ(x_{ij})}_{j=1}^{n_i} along f equals minimizing the sum of the Euclidean distances from the centralized data points to their projections on the normalized vector f/‖f‖_2, as follows:

J_i(f) = Σ_{j=1}^{n_i} ‖ c_j f/‖f‖_2 − (φ(x_{ij}) − φ(m_i)) ‖_2²   (8)

where φ(m_i) = (1/n_i) Σ_{j=1}^{n_i} φ(x_{ij}), the mean of {φ(x_{ij})}_{j=1}^{n_i}, and |c_j| is the distance from φ(m_i) to the projection point of φ(x_{ij}). After simple algebra, we get:

c_j = f^T (φ(x_{ij}) − φ(m_i)) / ‖f‖_2   (9)

Substituting (9) and (7) into (8), we arrive at:

J_i(α) = o_i − (α^T L̄_i² α) / (α^T K α)   (10)

where K is the n×n kernel matrix defined on all the instances of both positive and negative bags, o_i = trace(K_{I(B_i)}) − (1/n_i) 1^T K_{I(B_i)} 1, where K_{I(B_i)} is the n_i×n_i matrix formed by extracting the I(B_i) columns (please refer to Section 2.1) and the I(B_i) rows of the overall kernel matrix K, and L̄_i² is the "centralized" L_i² as follows:

L̄_i² = L_i^T L_i − 1_n L_i^T L_i − L_i^T L_i 1_n + 1_n L_i^T L_i 1_n   (11)

where 1_n is a matrix with all elements equal to 1/n, and L_i is the n×n matrix formed by keeping the I(B_i) rows of K and setting all the elements in the other rows to 0:

L_i(p, q) = K(p, q) if p ∈ I(B_i), ∀q ∈ {1, …, n}, and 0 otherwise.

Generally, the optimal normal vector f varies for different positive bags. Hence it is meaningless to solve (10) for its optimum. Instead, we average (10) by the bag size n_i, and use a common threshold τ to bound the averaged projection distance for different bags from above. We name the obtained inequality "the projection constraint", as follows:

(1/n_i) ( o_i − (α^T L̄_i² α) / (α^T K α) ) ≤ τ   (12)

This is equivalent to bounding the variance of instances in positive bags along f from below [15]. Substituting (7) into (6), and adding the projection constraint (12) for each positive bag to the resulting problem, we arrive at the following optimization problem:

min_{α,b,ξ} ½ α^T K α + C [Σ_{i∈I_+} ξ_i + Σ_{j∈I_i, i∈I_−} ξ_{ij}]   (13)
s.t. 1 − ξ_i − max_{j∈I_i} (k_{I(x_{ij})}^T α + b) ≤ 0, ξ_i ≥ 0, i ∈ I_+
     k_{I(x_{ij})}^T α + b ≤ −1 + ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_−
     α^T (o_i K − L̄_i²) α − τ n_i α^T K α ≤ 0, i ∈ I_+
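For illustration, the quantities o_i and L̄_i² entering (10)-(13) can be computed directly from the kernel matrix. The NumPy sketch below (function and variable names are ours) mirrors Eq. (11) literally; a practical implementation would exploit the fact that only the I(B_i) rows of L_i are non-zero.

```python
import numpy as np

def projection_constraint_terms(K, bag_idx):
    """Compute o_i and the centralized matrix L-bar_i^2 of Eqs. (10)-(11).

    K       : (n, n) kernel matrix over all instances.
    bag_idx : index array I(B_i) for positive bag B_i.
    """
    n = K.shape[0]
    ni = len(bag_idx)
    K_bag = K[np.ix_(bag_idx, bag_idx)]
    o_i = np.trace(K_bag) - K_bag.sum() / ni

    L = np.zeros((n, n))
    L[bag_idx, :] = K[bag_idx, :]      # keep only the I(B_i) rows of K
    LtL = L.T @ L
    ones = np.full((n, n), 1.0 / n)    # the matrix 1_n
    L2 = LtL - ones @ LtL - LtL @ ones + ones @ LtL @ ones
    return o_i, L2

# The third constraint of (13) for bag i then reads
#   alpha^T (o_i * K - L2) alpha - tau * ni * alpha^T K alpha <= 0.
```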
3.3 Optimization via the CCCP

In the problem (13), the objective function and the second set of constraints are convex. The first set of constraints are all in the form of a difference of two convex functions, since the max function is convex. According to the definition of J_i(f) in (8), J_i(α) in (10) is not less than 0 for any α. Thus for any i ∈ I_+, o_i K − L̄_i² is positive semi-definite. Consequently, the third set of constraints are also in the form of a difference of two convex functions. Therefore, we can apply the Constrained Concave-Convex Procedure (CCCP) introduced in Section 2.2 to solve the problem (13). Since the max function in the first set of constraints is nonsmooth, we have to change gradients to subgradients to use the CCCP. The subgradient is usually not unique, and we adopt the definition used in [6] for the subgradient of max_{j∈I_i} k_{I(x_{ij})}^T α:

∂( max_{j∈I_i} k_{I(x_{ij})}^T α ) = Σ_{j∈I_i} β_{ij} k_{I(x_{ij})}   (14)

where

β_{ij} = 1/n_a if k_{I(x_{ij})}^T α = max_{j∈I_i} (k_{I(x_{ij})}^T α), and 0 otherwise   (15)

where n_a is the number of x_{ij} that maximize k_{I(x_{ij})}^T α. At the t-th iteration, denote the current estimates of α and β_{ij} by α^(t) and β_{ij}^(t) respectively. Then the first-order Taylor expansion of max_{j∈I_i} k_{I(x_{ij})}^T α is as follows:

max_{j∈I_i} k_{I(x_{ij})}^T α^(t) + Σ_{j∈I_i} β_{ij}^(t) k_{I(x_{ij})}^T (α − α^(t))   (16)

According to (15), we have

Σ_{j∈I_i} β_{ij}^(t) k_{I(x_{ij})}^T α^(t) = max_{j∈I_i} (k_{I(x_{ij})}^T α^(t))   (17)

Using (17), (16) reduces to

Σ_{j∈I_i} β_{ij}^(t) k_{I(x_{ij})}^T α   (18)

Replacing max_{j∈I_i} k_{I(x_{ij})}^T α in the first set of constraints by (18) and α^T K α in the third set of constraints by its first-order Taylor expansion, we finally get:

min_{α,b,ξ} ½ α^T K α + C [Σ_{i∈I_+} ξ_i + Σ_{j∈I_i, i∈I_−} ξ_{ij}]   (19)
s.t. 1 − ξ_i − ( Σ_{j∈I_i} β_{ij}^(t) k_{I(x_{ij})}^T α + b ) ≤ 0, ξ_i ≥ 0, i ∈ I_+
     k_{I(x_{ij})}^T α + b ≤ −1 + ξ_{ij}, ξ_{ij} ≥ 0, j ∈ I_i, i ∈ I_−
     α^T S_i α − τ n_i [ α^(t)T K α^(t) + 2 α^(t)T K (α − α^(t)) ] ≤ 0, i ∈ I_+

where S_i = o_i K − L̄_i². The problem (19) is a quadratically constrained quadratic program (QCQP) with a convex objective function and convex constraints, and thus can be readily solved via interior point methods [18]. Following the CCCP, we iterate until (19) converges.
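A schematic rendering of the CCCP loop for (19) is sketched below. It is not the authors' implementation (they solve the QCQP with MOSEK): we pose each convex subproblem with CVXPY, factor the PSD matrices so the quadratic terms become sums of squares, and start from a random point as in Section 2.2; a feasible starting point is assumed, convergence checks are omitted, and all names are ours.

```python
import numpy as np
import cvxpy as cp

def sqrt_psd(M, eps=1e-9):
    """Return R with R^T R ~ M for a positive semi-definite matrix M."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None) + eps)).T

def cccp_pc_svm(K, pos_bags, neg_idx, S, C=1.0, tau=0.1, iters=10, seed=0):
    """CCCP loop for problem (19).

    K        : (n, n) kernel matrix; pos_bags: list of index arrays I(B_i);
    neg_idx  : indices of all negative-bag instances;
    S        : list of PSD matrices S_i = o_i K - L-bar_i^2.
    """
    n = K.shape[0]
    RK = sqrt_psd(K)                    # alpha^T K alpha = ||RK alpha||^2
    RS = [sqrt_psd(Si) for Si in S]
    a_t = 0.01 * np.random.default_rng(seed).standard_normal(n)  # random start
    b_t = 0.0
    for _ in range(iters):
        a, b = cp.Variable(n), cp.Variable()
        xi_p = cp.Variable(len(pos_bags), nonneg=True)
        xi_n = cp.Variable(len(neg_idx), nonneg=True)
        cons = []
        for i, idx in enumerate(pos_bags):
            scores = K[idx, :] @ a_t
            amax = np.isclose(scores, scores.max())
            beta = amax / amax.sum()    # beta_ij^(t) of Eq. (15)
            cons.append(beta @ (K[idx, :] @ a) + b >= 1 - xi_p[i])    # via (18)
            lin = tau * len(idx) * (2 * (K @ a_t) @ a - a_t @ K @ a_t)
            cons.append(cp.sum_squares(RS[i] @ a) <= lin)             # linearized (12)
        for j, q in enumerate(neg_idx):
            cons.append(K[q, :] @ a + b <= -1 + xi_n[j])
        obj = 0.5 * cp.sum_squares(RK @ a) + C * (cp.sum(xi_p) + cp.sum(xi_n))
        cp.Problem(cp.Minimize(obj), cons).solve()
        a_t, b_t = a.value, b.value
    return a_t, b_t
```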
4 Experiments

4.1 Classification: Benchmark

The benchmark data sets come from two areas. The Musk 1 and Musk 2 data sets [1] are two biochemical tasks which directly promoted the research of multi-instance learning. The aim is to predict the activity of drugs from structural information. Each drug molecule is a bag of potential conformations (instances). The Musk 1 data set consists of 47 positive bags, 45 negative bags, and 476 instances in total. The Musk 2 data set consists of 39 positive bags, 63 negative bags, and 6598 instances in total. Each instance is represented by a 166-dimensional vector. Elephant, tiger and fox are three data sets from image categorization. The aim is to differentiate images with elephants, tigers, and foxes [4] from those without, respectively. A bag here is a group of ROIs (Regions Of Interest) drawn from a certain image. Each data set contains 100 positive bags and 100 negative bags, and each ROI, as an instance, is a 230-dimensional vector. Related methods for comparison include Diverse Density (DD, [2]), EM-DD [19], MI-SVM [4], MI-Kernel [5], stMIL [8], sbMIL [8], MIGraph and miGraph [10].

Table 1: Test Accuracy (%) on Benchmark. Rows and columns correspond to methods and datasets, respectively.

Algorithm | Musk 1   | Musk 2   | Elephant | Fox      | Tiger
PC-SVM    | 90.6±2.7 | 91.3±3.2 | 89.8±1.2 | 65.7±1.4 | 83.8±1.3
MIGraph   | 90.0±3.8 | 90.0±2.7 | 85.1±2.8 | 61.2±1.7 | 81.9±1.5
miGraph   | 88.9±3.3 | 90.3±2.6 | 86.8±0.7 | 61.6±2.8 | 86.0±1.0
MI-Kernel | 88.0±3.1 | 89.3±1.5 | 84.3±1.6 | 60.3±1.9 | 84.2±1.0
MI-SVM    | 77.9     | 84.3     | 81.4     | 59.4     | 84.0
stMIL     | 79.5     | 68.4     | 81.6     | 60.7     | 74.7
sbMIL     | 91.8     | 87.7     | 88.6     | 69.8     | 83.0
DD        | 88.0     | 84.0     | N/A      | N/A      | N/A
EM-DD     | 84.8     | 84.9     | 78.3     | 56.1     | 72.1

When applied to multi-instance classification, our approach involves three parameters, namely, the bias/variance trade-off factor C, the kernel parameter (e.g., σ in the RBF kernel), and the bound parameter τ in the projection constraint. In the experiment, C, σ, and τ are selected from {0.01, 0.1, 1, 10, 50, 100}, {0.2, 0.4, 0.6, 0.8, 1.0} and {0.01, 0.1, 1, 10, 100}, respectively.
This work was partially supported by National Basic Research Program of China under Grant No.2004CB318103 and National Natural Science Foundation of China under award No.60835002 and 60975040. 1 http://www.mosek.com/ 8 References [1] T. G. Dietterich, R. H. Lathrop, and T. Lozano-P?erez. Solving the multiple-instance problem with axisparallel rectangles. Artificial Intelligence, 89(1-2):31?71, 1997. [2] O. Maron and T. Lozano-P?erez. A framework for multiple-instance learning. Advances in neural information processing systems, pages 570?576, 1998. [3] J. Wang and J.D. Zucker. Solving the multiple-instance problem: A lazy learning approach. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 1119?1126. Citeseer, 2000. [4] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. Advances in neural information processing systems, pages 577?584, 2003. [5] T. G?artner, P.A. Flach, A. Kowalczyk, and A.J. Smola. Multi-instance kernels. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 179?186. Citeseer, 2002. [6] P.M. Cheung and J.T. Kwok. A regularization framework for multiple-instance learning. In Proceedings of the 23rd international conference on Machine learning, page 200. ACM, 2006. [7] Z.H. Zhou and J.M. Xu. On the relation between multi-instance learning and semi-supervised learning. In Proceedings of the 24th international conference on Machine learning, page 1174. ACM, 2007. [8] R.C. Bunescu and R.J. Mooney. Multiple instance learning for sparse positive bags. In Proceedings of the 24th international conference on Machine learning, page 112. ACM, 2007. [9] H.Y. Wang, Q. Yang, and H. Zha. Adaptive p-posterior mixture-model kernels for multiple instance learning. In Proceedings of the 25th international conference on Machine learning, pages 1136?1143. ACM, 2008. [10] Z. H. Zhou, Y. Y. Sun, and Yu. F. Li. Multi-instance learning by treating instances as non-I.I.D. samples. In L?eon Bottou and Michael Littman, editors, Proceedings of the 26th International Conference on Machine Learning, pages 1249?1256, Montreal, June 2009. test, Omnipress. [11] Y. Chen and J.Z. Wang. Image categorization by learning and reasoning with regions. The Journal of Machine Learning Research, 5:913?939, 2004. [12] B. Settles, M. Craven, and S. Ray. Multiple-instance active learning. Advances in Neural Information Processing Systems (NIPS), 20:1289?1296, 2008. [13] G. Fung, M. Dundar, B. Krishnapuram, and R.B. Rao. Multiple instance learning for computer aided diagnosis. In NIPS2007, page 425. The MIT Press, 2007. [14] A.J. Smola, SVN Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics. Citeseer, 2005. [15] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern classification. John Wiley & Sons, 2001. [16] B. Sch?olkopf and A.J. Smola. Learning with kernels. Citeseer, 2002. [17] Q. Tao, D.J. Chu, and J. Wang. Recursive support vector machines for dimensionality reduction. IEEE Transactions on Neural Networks, 19(1):189?193, 2008. [18] S.P. Boyd and L. Vandenberghe. Convex optimization. Cambridge Univ Pr, 2004. [19] Q. Zhang and S.A. Goldman. Em-dd: An improved multiple-instance learning technique. Advances in neural information processing systems, 2:1073?1080, 2002. 9
Computing Marginal Distributions over Continuous Markov Networks for Statistical Relational Learning

Matthias Bröcheler, Lise Getoor
University of Maryland, College Park
College Park, MD 20742
{matthias, getoor}@cs.umd.edu

Abstract

Continuous Markov random fields are a general formalism to model joint probability distributions over events with continuous outcomes. We prove that marginal computation for constrained continuous MRFs is #P-hard in general and present a polynomial-time approximation scheme under mild assumptions on the structure of the random field. Moreover, we introduce a sampling algorithm to compute marginal distributions and develop novel techniques to increase its efficiency. Continuous MRFs are a general-purpose probabilistic modeling tool and we demonstrate how they can be applied to statistical relational learning. On the problem of collective classification, we evaluate our algorithm and show that the standard deviation of marginals serves as a useful measure of confidence.

1 Introduction

Continuous Markov random fields are a general and expressive formalism to model complex probability distributions over multiple continuous random variables. Potential functions, which map the values of sets (cliques) of random variables to real numbers, capture the dependencies between variables and induce an exponential-family density function as follows. Given a finite set of n random variables X = {X₁, ..., Xₙ} with associated bounded interval domains Dᵢ ⊂ ℝ, let Φ = {φ₁, ..., φₘ} be a finite set of m continuous potential functions defined over the interval domains, i.e., φⱼ : D → [0, M] for some bound M ∈ ℝ⁺, where D = D₁ × D₂ × ... × Dₙ. For a set of free parameters λ = {λ₁, ..., λₘ}, we then define the probability measure P over X with respect to λ through its density function f as:

    f(x) = \frac{1}{Z(\lambda)} \exp\Big[ -\sum_{j=1}^{m} \lambda_j \varphi_j(x) \Big] ; \quad Z(\lambda) = \int_D \exp\Big[ -\sum_{j=1}^{m} \lambda_j \varphi_j(x) \Big] dx    (1)

where Z is the normalization constant. The definition is analogous to the popular discrete Markov random fields (MRFs) but uses integration over the bounded domain rather than summation for the partition function Z. In addition, we assume the existence of a set of k_A equality and k_B inequality constraints on the random variables, that is, A(x) = a, where A : D → ℝ^{k_A}, a ∈ ℝ^{k_A}, and B(x) ≤ b, where B : D → ℝ^{k_B}, b ∈ ℝ^{k_B}. Both equality and inequality constraints restrict the possible combinations of values the random variables X can assume. That is, we set f(x) = 0 whenever any of the constraints are violated and constrain the domain of integration, denoted D̃, for the normalization constant correspondingly. Constraints are useful in probabilistic modeling to exclude inconsistent outcomes based on prior knowledge about the distribution. We call this class of MRFs constrained continuous Markov random fields (CCMRFs).

Probabilistic inference often requires the computation of marginal distributions for all or a subset of the random variables X. Marginal computation for discrete MRFs has been extensively studied due to its wide applicability in probabilistic reasoning. In this work, we study the theoretical and practical aspects of computing marginal density functions over CCMRFs.

    λ₁ : A.text ≈ B.text ∧ class(B, C) ⇒ class(A, C)
    λ₂ : link(A, B) ∧ class(B, C) ⇒ class(A, C)
    Constraint : functional(class)

Table 1: Example PSL program for collective classification.
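To make Eq. (1) concrete, the following minimal Python sketch evaluates the unnormalized density of a CCMRF, returning zero whenever a constraint is violated. The helper names and the sample point are ours for illustration; the potentials and weights instantiate the running example introduced in Section 3 (Example 1).

```python
import numpy as np

def unnormalized_density(x, potentials, lam, eq=(), ineq=(), tol=1e-9):
    """exp(-sum_j lam_j * phi_j(x)) from Eq. (1); f(x) = 0 outside the
    constrained domain, i.e., when some A_i(x) = a_i or B_i(x) <= b_i fails."""
    if any(abs(A(x) - a) > tol for A, a in eq):
        return 0.0
    if any(B(x) > b + tol for B, b in ineq):
        return 0.0
    return float(np.exp(-sum(l * phi(x) for l, phi in zip(lam, potentials))))

# Example 1 (Section 3): X1, X2, X3 on [0, 1] with x1 + x3 <= 1.
potentials = [lambda x: x[0],
              lambda x: max(0.0, x[0] - x[1]),
              lambda x: max(0.0, x[1] - x[2])]
lam = (1.0, 2.0, 1.0)
ineq = [(lambda x: x[0] + x[2], 1.0)]
print(unnormalized_density(np.array([0.2, 0.5, 0.4]), potentials, lam, ineq=ineq))
```

Computing the normalization constant Z(λ), and hence any marginal, requires integrating this function over the constrained domain, which is exactly the problem studied below.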
General continuous MRFs can be used in a variety of probabilistic modeling scenarios and have been studied for applications with continuous domains such as computer vision. Gaussian random fields are a type of continuous MRF which assume normality. In this work, we make no restrictive assumptions about the marginal distributions other than boundedness. For general continuous MRFs, non-parametric belief propagation (NBP) [1] has been proposed as a method to estimate marginals. NBP represents the "belief" as a combination of kernel densities which are propagated according to the structure of the MRF. In contrast to NBP, our approach provides polynomial-time approximation guarantees and avoids the representational choice of kernel densities.

The main contributions of this work are described in Section 3. We begin by showing that computing marginals in CCMRFs is #P-hard in the number of random variables n. We then discuss a Markov chain Monte Carlo (MCMC) sampling scheme that can approximate the exact distribution to within ε error in polynomial time under the general assumption that the potential functions and inequality constraints are convex. Based on this result, we propose a tractable sampling algorithm and present a novel approach to increasing its effectiveness by detecting and counteracting slow convergence. Our theoretical results are based on recent advances in computational geometry and the study of logconcave functions [2]. In Section 4, we investigate the performance, scalability, and convergence of the sampling algorithm on the probabilistic inference problem of collective classification on a set of Wikipedia documents. In particular, we show that the standard deviation of the marginal density function can serve as a strong indicator of the "confidence" in the classification prediction, thereby demonstrating a useful qualitative aspect of marginals over continuous MRFs. Before turning to the main contributions of the paper, in the next section we give background motivation for the form of CCMRFs we study.

2 Motivation

Our treatment of CCMRFs is motivated by probabilistic similarity logic (PSL) [3]. PSL is a relational language that provides support for probabilistic reasoning about similarities. PSL is similar to existing SRL models, e.g., MLNs [4], BLPs [5], RMNs [6], in that it defines a probabilistic graphical model over the properties and relations of the entities in a domain as a grounding of a set of rules that have attached parameters. However, PSL supports reasoning about "soft" truth values, which can be seen as similarities between entities or sets of entities, degrees of belief, or strengths of relationships. PSL uses annotated logic rules to capture the dependency structure of the domain, based on which it builds a joint continuous probabilistic model over all decision atoms which can be expressed as a CCMRF as defined above. PSL has been used to reason about the similarity between concepts from different ontologies as well as articles from Wikipedia.

Table 1 shows a simple PSL program for collective classification. The first rule states that documents with similar text are likely to have the same class. The second rule says that two documents which are linked to each other are also likely to be assigned the same class. Finally, we express the constraint that each document can have at most one class, that is, the class predicate is functional and can only map to one value. Such domain-specific constraints motivate our introduction of equality and inequality constraints for CCMRFs.

Rules and constraints are written in a first-order logic formalism and are grounded out against the observed data such that each ground rule constitutes one potential function or constraint computing the truth value of the formula. Rules have an associated weight λᵢ which is used as the parameter for the corresponding potential functions. The weights can be learned from training data. In the following, we make some assumptions about the nature of the constraints and the potential functions, motivated by the requirements of the PSL framework and the types of CCMRFs modeled therein. Firstly, we assume all domains are the [0, 1] interval, which corresponds to the domain of similarity truth values in PSL. Secondly, all constraints are assumed to be linear. Thirdly, the potential functions φⱼ are of the form φⱼ(x) = max(0, oⱼ · x + qⱼ), where oⱼᵀ ∈ ℝⁿ is an n-dimensional row vector and qⱼ ∈ ℝ. The particular form of the potential functions is motivated by the way similarity truth values are combined in PSL using t-norms (see [3] for details). While the techniques presented in this work are not specific to PSL, a brief outline of the PSL framework helps in understanding the assumptions about the CCMRFs of interest made in our algorithm and experiments. In Section 3.5 we show how our assumptions can be relaxed while maintaining polynomial-time guarantees for applications outside the PSL framework.
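The hinge form φⱼ(x) = max(0, oⱼ · x + qⱼ) arises naturally when a ground rule's distance to satisfaction is measured with Łukasiewicz-style truth combination. The paper defers the exact t-norm construction to [3], so the following sketch rests on that assumption, and the truth values used are invented purely for illustration.

```python
def rule_potential(weight, body_truths, head_truth):
    """Distance to satisfaction of a ground rule body -> head under a
    Lukasiewicz-style combination (our assumption): conjunction
    t(b1 ^ b2) = max(0, b1 + b2 - 1), implication penalty
    max(0, t(body) - t(head)).  Note this has exactly the hinge form
    max(0, o.x + q) assumed in Section 2."""
    t_body = max(0.0, sum(body_truths) - (len(body_truths) - 1))
    return weight * max(0.0, t_body - head_truth)

# Grounding of the first rule of Table 1 for documents a, b and class c,
# with a hypothetical observed text similarity of 0.9:
sim_ab = 0.9
class_b_c = 0.8   # current soft truth values of the two decision atoms
class_a_c = 0.3
print(rule_potential(1.0, [sim_ab, class_b_c], class_a_c))  # max(0, 0.9+0.8-1 - 0.3) = 0.4
```

Since the observed similarity is a constant at grounding time, it folds into qⱼ, while the decision atoms appear with coefficients ±1 in oⱼ, giving a linear hinge in the random variables.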
Rules and constraints are written in first order logic formalism and are grounded out against the observed data such that each ground rule constitutes one potential function or constraint computing the truth value of the formula. Rules have an associated weight ?i which is used as the parameter for each associated potential function. The weights can be learned from training data. In the following, we make some assumptions about the nature of the constraints and the potential functions motivated by the requirements of the PSL framework and the types of CCMRFs modeled therein. Firstly, we assume all domains are in the [0, 1] interval which corresponds to the domain of similarity truth values in PSL. Secondly, all constraints are assumed to be linear. Thirdly, the potential functions ?j are of the form ?j (x) = max(0, oj ?x+qj ) where oTj ? Rn is a n-dimensional row vector and qj ? R. The particular form of the potential functions is motivated by the way similarity truth values are combined in PSL using t-norms (see [3] for details). While the techniques presented in this work are not specific to PSL, a brief outline of the PSL framework helps in understanding the assumptions about the CCMRFs of interest made in our algorithm and experiments. In Section 3.5 we show how our assumptions can be relaxed while maintaining polynomial-time guarantees for applications outside the PSL framework. 2 1 X1 X1 1 xMAP p P(0.4 ? X2 ? 0.6) p? X3 0 1 p? X2 a) Example of geometric marginal computation 3 p 0 1 X3 b) Hit-and-Run and random ball walk illustration Computing continuous marginals This section contains the main technical contributions of this paper. We start our study of marginal computation for CCMRFs by proving that computing the exact density function is #P hard (3.1). In Section 3.2, we discuss how to approximate the marginal distribution using a MCMC sampling scheme which produces a guaranteed -approximation in polynomial time under suitable conditions. We show how to improve the sampling scheme by detecting phases of slow convergence and present a technique to counteract them (3.3). Finally, we describe an algorithm based on the sampling scheme and its improvements (3.4). In addition, we discuss how to relax the linearity conditions in Section 3.5. Throughout this discussion we use the following simple example for illustration: Example 1 Let X = {X1 , X2 , X3 } be subject to the inequality constraint x1 + x3 ? 1. Let ?1 (x) = x1 , ?2 (x) = max(0, x1 ? x2 ), ?3 (x) = max(0, x2 ? x3 ) where ? = (1, 2, 1) are the associated free parameters. 3.1 Exact marginal computation Theorem 1R Computing the marginal probability density function fX0 (x0 ) = y??D? i ,s.t.Xi ?X f (x0 , y)dy for a subset X0 ? X under a probability measure P defined / 0 by a CCMRF is #P hard in the worst case. We prove this statement by a simple reduction from the problem of computing the volume of a n-dimensional polytope defined by linear inequality constraints. To see the relationship to computational geometry, note that the domain D is a n-dimensional unit hypercube1 . Each linear inequality constraints Bi from the system B can be represented by a hyperplane which ?cuts off? part of the hypercube D. Finally, the potential functions induce a probability distribution over the resulting convex polytope. Figure 3a) visualizes the domain for our running example in the 3-dimensional Euclidean space. The constraint domain is shown as a wedge. 
Proof 1 (Sketch). For any random variable X ∈ X, the marginal probability P(l ≤ X ≤ u) under the uniform probability distribution defined by a single potential function φ = 0 corresponds to the volume of the "slice" defined by the bounds u < l ∈ [0, 1] relative to the volume of the entire polytope. In [7] it was shown that computing the volume of such slices is at least as hard as computing the volume of the entire polytope, which is known to be #P-hard [8].

3.2 Approximate marginal computation and sampling scheme

Despite this hardness result, efficient approximation algorithms for convex volume computation based on MCMC techniques have been devised and yield polynomial-time approximation guarantees. We review these techniques and then relate them to our problem of marginal computation. The first provably polynomial-time approximation algorithm for volume computation was based on "random ball walks". Starting from some initial point p inside the polytope, one samples from the local density function of f restricted to the inside of a ball of radius r around the point p. If the newly sampled point p′ lies inside the polytope, we move to p′; otherwise we stay at p and repeat the sampling. If P is the uniform distribution (as typically chosen for volume computation), the resulting Markov chain converges to P over the polytope in O*(n³) steps, assuming that the starting distribution is not "too far" from P [9].²

More recently, the hit-and-run sampling scheme [10] was rediscovered, which has the advantage that no strong assumptions about the initial distribution need to be made. As in the random ball walk, we start at some interior point p. Next, we generate a direction d (i.e., an n-dimensional vector of length 1) uniformly at random and compute the line segment l of the line p + αd that resides inside the polytope. We then compute the distribution of P over the segment l, sample from it, and move to the new sample point p′ to repeat the process. For P being the uniform distribution, the Markov chain also converges after O*(n³) steps, but for hit-and-run we only need to assume that the starting point p does not lie on the boundary of the polytope [2]. In [7], the authors show that hit-and-run significantly outperforms random ball walk sampling in practice, because it (1) does not get easily stuck in corners, since each sample is guaranteed to be drawn from inside the polytope, and (2) does not require parameter settings like the radius r, which greatly influences the performance of the random ball walk. Figure 3 b) shows an iteration of the random ball walk and the hit-and-run sampling schemes for our running example, restricted to just two dimensions to simplify the presentation. We can see that, depending on the radius of the ball, a significant portion of it may not intersect the feasible region.

Lovász and Vempala [2] have proven a stronger result which shows that hit-and-run sampling converges for general log-linear distributions. Based on their result, we get a polynomial-time approximation guarantee for distributions induced by CCMRFs as defined above.
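A single hit-and-run step is simple to state in code. The sketch below handles the uniform distribution over {x : Bx ≤ b}; the full algorithm of Section 3.4 additionally projects onto the null space of the equality constraints and samples the step length from the log-piecewise-linear density along the segment rather than uniformly. Helper names are ours.

```python
import numpy as np

def hit_and_run_step(x, B, b, rng):
    """One hit-and-run step for the *uniform* distribution over the
    polytope {x : Bx <= b}, for a strictly feasible current point x.
    Sample a random direction d, intersect the line x + t*d with every
    halfspace to get the segment [t_lo, t_hi], then move uniformly."""
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    slack = b - B @ x            # nonnegative for a feasible x
    rate = B @ d                 # how fast each constraint is approached
    t_lo, t_hi = -np.inf, np.inf
    for s, r in zip(slack, rate):
        if r > 1e-12:
            t_hi = min(t_hi, s / r)
        elif r < -1e-12:
            t_lo = max(t_lo, s / r)
    return x + rng.uniform(t_lo, t_hi) * d
```

For a log-linear target one replaces the final uniform draw with a draw from the one-dimensional restriction of the density to [t_lo, t_hi], as done analytically in the algorithm below.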
kA , under the assumptions that we start from an initial distribution ? such that the density function d?/dP is bounded by M except on a set S with ?(S) ? /2. ? is an n Proof 2 (Sketch) Since A, B are linear, D ? = n ? kA dimensional convex polytope after dimensionality reduction through A. By definition, f is from the exponential family and since all factors are linear or maximums of linear functions, f is a log concave function (maximums and sums of convex functions are convex). More specifically, f is a log concave and log piecewise linear function. Let ? s be the distribution of the current point after s steps of hit-and-run have been applied 2 2 5 M nR to f starting from ?. Now, according to Theorem 1.3 from [2], for s > 1030 n rR 2 ln r the total s variation distance of ? and P is less than , where r is such that the level set of f of probability 81 contains a ball of radius r and R2 ? Ef (|x ? zf |2 ), where zf is the centroid of f . Now, each hit-and-run step requires us to iterate over the random variable domain boundaries, O(? n), compute intersections with the inequality constraints, O(? nkB ), and integrate over the line segment involving all factors, O(? nm). 3.3 Improved sampling scheme Our proposed sampling algorithm is an implementation of the hit-and-run MCMC scheme. However, the theoretical treatment presented above leaves two questions unaddressed: 1) How do we get the initial distribution ?? 2) The hit-and-run algorithm assumes that all sample points are strictly inside the polytope and bounded away from its boundary. How can we get out of corners if we do get stuck? The theorem above assumes a suitable initial distribution ?, however, in practice, no such distribution is given. Lov?asz and Vempala also show that the hit-and-run scheme converges from a single starting point on uniform distributions under the condition that it does not lie on the boundary and at the expense of an additional factor of n in the number of steps to be taken (compare Theorem 1.1 and Corollary 1.2 in [2]). We follow this approach and use a MAP state xM AP of the distribution P as the single starting point for the sampling algorithm. Choosing a MAP state as the starting point has two advantages: 1) we are guaranteed that xM AP is an interior point and 2) it is the point with the highest probability density and therefore highest probability mass in a small local neighborhood. However, starting from a MAP state elevates the importance of the second question, since the MAP state often lies exactly on the boundary of the polytope and therefore we are likely to start the sampling algorithm from a vertex of the polytope. The problem with corner points p is that most of 2 The O? notation ignores logarithmic and factors and dependence on other parameters like error bounds. 4 the directions sampled uniformly at random will lead to line segments of zero length and hence we do not move between iterations. Let W be the subset of inequality constraints B that are ?active? at the corner point p and b the corresponding entries in b, i.e. W p = b (since all constraints are linear, we abuse notation and consider B, W to be matrices). In other words, the hyperplanes corresponding to the constraints in W intersect in p. Now, for all directions d ? Rn such that there exist active constraints Wi , Wj with Wi d < 0 and Wj d > 0, the line segment through p induced by d must necessarily be 0. It also follows that more active constraints increase the likelihood of getting stuck in a corner. 
For example, in Figure 3 b) the point x_MAP in the upper left-hand corner denotes the MAP state of the distribution defined in our running example. If we generate a direction uniformly at random, only 1/4 of those directions will be feasible, that is, for all others we won't be able to move away from x_MAP. To avoid the problem of repeatedly sampling infeasible directions at corner points, we propose to restrict the sampling of directions to feasible directions only when we determine that a corner point has been reached. We define a corner point p as a point inside the polytope where the number of active constraints is above some threshold τ.³ A direction d is feasible if W d < 0. Assuming that there are a active constraints at corner point p (i.e., W has a rows), we sample each entry of the a-dimensional vector z from −|N(0, 1)|, where N(0, 1) is the standard Gaussian distribution with zero mean and unit variance. We then try to find directions d such that W d ≤ z. A number of algorithms have been proposed to solve such systems of linear inequalities for feasible points d. In our sampling algorithm we implement the relaxation method introduced by Agmon [11] and Motzkin and Schoenberg [12] due to its simplicity. The relaxation method proceeds as follows. We start with d₀ = 0. At each iteration we check if W dᵢ ≤ z; if so, we have found a solution and terminate. If not, we choose the most "violated" inequality constraint W_k from W, i.e., the row vector W_k from W which maximizes (W_k dᵢ − z_k)/‖W_k‖, and update the direction:

    d_{i+1} = d_i + 2 \, \frac{z_k - W_k d_i}{\|W_k\|^2} \, W_k^T

The relaxation method is guaranteed to terminate, since a feasible direction d always exists [12].

3.4 Sampling algorithm

Putting the pieces together, we present the marginal distribution sampling algorithm in Figure 1.

[Figure 1: Constrained continuous MRF sampling algorithm (line-numbered pseudocode, lines 1–54). Input: a CCMRF specified by random variables X with domains D = [0, 1]ⁿ, equality constraints A(x) = a, inequality constraints B(x) ≤ b, potential functions φ, and parameters λ. Output: marginal probability density histograms H[Xᵢ] : [0, 1] → ℝ⁺ for all Xᵢ ∈ X. The line numbers cited in the walkthrough below refer to this listing.]

³ We used τ = 2 in our experiments.
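The feasible-direction subroutine just described (lines 13–19 of Figure 1) can be sketched as follows; the iteration cap is our addition, purely as a numerical guard.

```python
import numpy as np

def feasible_direction(W, rng, max_iter=10_000):
    """Agmon / Motzkin-Schoenberg relaxation method as adapted in
    Section 3.3: find d with W d <= z, where z_i ~ -|N(0, 1)|, so that
    d points strictly away from all active constraints at a corner."""
    a = W.shape[0]                            # number of active constraints
    z = -np.abs(rng.standard_normal(a))
    row_norms = np.linalg.norm(W, axis=1)
    d = np.zeros(W.shape[1])
    for _ in range(max_iter):
        viol = W @ d - z
        if np.all(viol <= 0):
            return d
        k = int(np.argmax(viol / row_norms))  # most violated constraint
        d = d + 2.0 * (z[k] - W[k] @ d) / (W[k] @ W[k]) * W[k]
    raise RuntimeError("no convergence (a feasible direction always exists)")
```

The factor 2 reflects the current iterate across the violated hyperplane, exactly as in the update formula above.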
The inputs to the algorithm were discussed in Section 1. In addition, we assume that the domain restrictions Dᵢ = [l, u] for the random variables Xᵢ are encoded as pairs of linear inequality constraints l ≤ xᵢ ≤ u in B, b. The algorithm first analyzes the equality constraints A to determine the number of "free" random variables and reduce the dimensionality accordingly. The singular value decomposition of A is used to determine the n × n′ projection matrix P which maps from the null space of A to the original space D, where n′ = n − rank(A) is the dimensionality of the null space. If no equality constraints have been specified, P is the n-dimensional unit matrix. Next, the algorithm determines a MAP state x⁰ of the density function defined by the CCMRF, which is the point with the highest probability mass, that is, x⁰ = argmax_{x∈D̃} f(x). Since Z(λ) is constant and the logarithm monotonic, this is identical to x⁰ = argmin_{x∈D̃} Σ_{j=1}^m λⱼφⱼ(x). Hence, computing a MAP state can be cast as a linear optimization problem, since all constraints are linear and the potential functions are maxima of two linear functions. Linear optimization problems can be solved efficiently in time O(n^{3.5}) and are very fast in practice.

After determining the null space and starting point, we begin collecting samples. If we detected being stuck in a corner during the previous iteration, we sample a direction d from the feasible subspace of all possible directions in the reduced null space using the adapted relaxation method described above (lines 13–19). Otherwise, we sample a direction uniformly at random from the null space of A. We then normalize the direction and project it back into our original domain D by matrix multiplication with P. The projection ensures that all equality constraints remain satisfied as we move along the direction d. Next, we compute the segment of the line l : xʲ + αd inside the polytope defined by the inequality constraints B (lines 25–32). Iterating over all inequality constraints, we determine the value of α where l intersects constraint i. We keep track of the largest negative and smallest positive values to define the bounds [α_low, α_high], such that the line segment is defined exactly by those values of α in this interval. In addition, we determine all active constraints, i.e., those constraints where the current sample point xʲ is the point of intersection and hence α = 0. If the interval [α_low, α_high] has length 0, then we are currently sitting in a corner. If, in addition, the number of active constraints exceeds the threshold τ, we are stuck in a corner and abort the current iteration to start over with restricted direction sampling.

In lines 36–48 we compute the cumulative density function of the probability P over the line segment l with α ∈ [α_low, α_high]. Based on our assumption in Section 2, the sum of potential functions S = Σᵢ₌₁ᵐ λᵢφᵢ restricted to the line l is a continuous piecewise-linear function. In order to integrate the density function, we need to segment S into its differentiable parts, so we start by determining the subintervals of [α_low, α_high] where S is linear and differentiable and can therefore be described by S(α) = rα + c. We compute the slope r and y-intercept c for each potential function individually, as well as the point of undifferentiability a where the line crosses 0. We use a map M to store the line description [r, c] with the point of intersection a (lines 36–46). Then, we compute the aggregate slope r_a and y-intercept c_a for the sum of all potentials for each point of undifferentiability a (line 47) and use this information to compute the unnormalized cumulative density function by integrating over each subinterval and summing those up in Λ (line 48). Now, Λ_a/Λ_{α_high} gives the cumulative probability mass for all points of undifferentiability a which define the subintervals. Next, we sample a number s from the interval [0, Λ_{α_high}] uniformly at random (line 49) and compute α such that Λ_α = s (lines 50–51). Finally, we move to the new sample point xʲ⁺¹ = xʲ + αd and add it to the histogram which approximates the marginal densities, provided the number of steps taken so far exceeds the burn-in period, which we configured to be 1% of the total number of steps.
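Lines 36–51 are the heart of each step: restricted to the line x + αd, the weighted sum of hinge potentials is piecewise linear, so each piece of exp(−S(α)) integrates in closed form and the inverse CDF is available analytically. The following sketch mirrors that computation; variable names and tolerances are ours.

```python
import numpy as np

def sample_on_line(x, d, a_lo, a_hi, O, q, lam, rng):
    """Sample alpha ~ exp(-S(alpha)) on [a_lo, a_hi], where
    S(alpha) = sum_j lam_j * max(0, o_j.(x + alpha d) + q_j)
    is piecewise linear (cf. lines 36-51 of Figure 1)."""
    O, q, lam = np.asarray(O), np.asarray(q), np.asarray(lam)
    r = O @ d                         # per-potential slope along the line
    c = O @ x + q                     # per-potential value at alpha = 0
    bps = {a_lo, a_hi}                # breakpoints where a hinge switches
    for ri, ci in zip(r, c):
        if abs(ri) > 1e-12 and a_lo < -ci / ri < a_hi:
            bps.add(-ci / ri)
    bps = sorted(bps)
    masses, pieces = [], []
    for lo, hi in zip(bps[:-1], bps[1:]):
        act = (r * (0.5 * (lo + hi)) + c) > 0        # active hinges on piece
        ra, ca = float(lam[act] @ r[act]), float(lam[act] @ c[act])
        S_lo = ra * lo + ca
        if abs(ra) < 1e-12:
            mass = (hi - lo) * np.exp(-S_lo)         # constant density piece
        else:
            mass = (np.exp(-S_lo) - np.exp(-(ra * hi + ca))) / ra
        masses.append(mass)
        pieces.append((lo, ra, S_lo))
    u = rng.uniform(0.0, sum(masses))                # line 49
    for mass, (lo, ra, S_lo) in zip(masses, pieces): # lines 50-51: invert CDF
        if u > mass:
            u -= mass
            continue
        if abs(ra) < 1e-12:
            return lo + u * np.exp(S_lo)
        return lo - np.log(1.0 - u * ra * np.exp(S_lo)) / ra
    return bps[-1]                                   # numerical fallback
```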
3.5 Generalizing to convex continuous MRFs

In our treatment so far, we made specific assumptions about the constraints and potential functions. More generally, Theorem 2 holds when the inequality constraints as well as the potential functions are convex. A system of inequality constraints is convex if the set of all points that satisfy the constraints is convex, that is, any line connecting two points in the set is completely contained in the set. Our algorithm needs to be modified where we currently assume linearity. Firstly, computing a MAP state requires general convex optimization. Secondly, our method for finding feasible directions when caught in a corner of the polytope needs to be adapted to the case of arbitrary convex constraints. One simple approach is to use the tangent hyperplane at the point xʲ as an approximation to the actual constraint and proceed as before. Similarly, we need to modify the computation of intersection points between the line and the convex constraints, as well as how we determine the points of undifferentiability. Lastly, the computation of integrals over subintervals for the potential functions requires knowledge of the form of the potential functions to be solved analytically, or they need to be approximated efficiently. The algorithm can handle arbitrary domains for the random variables as long as they are connected subintervals of ℝ.

4 Experiments

This section presents an empirical evaluation of the proposed sampling algorithm on the problem of category prediction for Wikipedia documents based on similarity. After describing the data and the experimental methodology, we demonstrate that the computed marginal distributions effectively predict document categories. Moreover, we show that analysis of the marginal distribution provides an indicator for the confidence in those predictions. Finally, we investigate the convergence rate and runtime performance of the algorithm in detail.

For our evaluation dataset, we collected all Wikipedia articles that appeared in the featured list⁴ for a two-week period in Oct. 2009, thus obtaining 2460 documents. Of these, we considered a subset of 1717 documents assigned to the 7 most popular categories. After stemming and stop-word removal, we represented the text of each document as a tf/idf-weighted word vector. To measure the similarity between documents, we used the popular cosine metric on the weighted word vectors. The data contains the relation Link(fromDoc, toDoc), which establishes a hyperlink between two documents.

⁴ http://en.wikipedia.org/wiki/Wikipedia:Featured_lists; see [3] for more information on the dataset.
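For concreteness, a minimal version of this document representation might look as follows; the exact tf/idf weighting variant and the tokenization are not specified in the text, so the choices below are ours.

```python
import numpy as np
from collections import Counter

def tfidf_vectors(docs):
    """Build tf/idf-weighted word vectors for a list of token lists
    (one common tf * log(N/df) variant, assumed here)."""
    df = Counter(w for doc in docs for w in set(doc))
    vocab = {w: i for i, w in enumerate(df)}
    n = len(docs)
    V = np.zeros((n, len(vocab)))
    for i, doc in enumerate(docs):
        for w, tf in Counter(doc).items():
            V[i, vocab[w]] = tf * np.log(n / df[w])
    return V

def cosine(u, v):
    """Cosine similarity of two weighted word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
```

These pairwise similarities are the observed constants that instantiate the A.text ≈ B.text atoms when the PSL program of Table 1 is grounded.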
We used K-fold cross-validation for K = 20, 25, 30, 35 by splitting the dataset into K non-overlapping subsets, each of which is determined using snowball sampling over the link structure from a randomly chosen initial document. For each training and test data subset, we randomly designate 20% of the documents as "seed documents" of which the category is observed, and the goal is to predict the categories of the remaining documents. All experiments were executed on identical hardware powered by two Intel Xeon Quad Core 2.3 GHz processors and 8 GB of RAM.

4.1 Classification results

Figure 2: a) Classification accuracy

K            20      25      30      35
Baseline     39.5%   39.1%   36.7%   38.8%
Marginals    55.8%   51.5%   51.1%   56.6%
Improvement  41.4%   31.7%   39.1%   46.1%

b) Standard deviation as an indicator of confidence

K                         20        25        30         35
P(Null Hypothesis)        1.95E-09  2.40E-13  <1.00E-16  4.54E-08
Relative Difference Δ(σ)  38.3%     41.2%     43.5%      39.0%

The baseline method uses only the document content, propagating document categories via textual similarity measured by the cosine distance. Using rules and constraints similar to those presented in Table 1, we create a joint probabilistic model for collective classification of Wikipedia documents. We use PSL twofold in this process: firstly, PSL constructs the CCMRF by grounding the rules and constraints against the given data as described in Section 2, and secondly, we use the perceptron weight learning method provided by PSL to learn the free parameters of the CCMRF from the training data (see [3] for more detail). The sampling algorithm takes the constructed CCMRF and learned parameters as input and computes the marginal distributions for all random variables from 3 million samples. We have one random variable to represent the similarity for each possible document-category pair, that is, one RV for each grounding of the category predicate. For each document D we pick the category C with the highest expected similarity as our prediction.

The accuracy in prediction of both methods is compared in Figure 2 a) over the 4 different splits of the data. We observe that the collective probabilistic model outperforms the baseline by up to 46%. All results are statistically significant at p = 0.02. While this result suggests that the sampling algorithm works in practice, it is not surprising or novel, since similar results for collective classification have been produced before using other approaches in statistical relational learning (e.g., compare [13]). However, the marginal distributions we obtain provide additional information beyond the simple point estimate of the expected value. In particular, we show that the standard deviation of the marginals can serve as an indicator for the "confidence" in the classification prediction. In order to show this, we compute the standard deviation of the marginal distributions for those random variables picked during the prediction stage for each fold. We separate those values into two sets, S₊ and S₋, based on whether the prediction turned out to be correct (+) or incorrect (−) when evaluated against the ground truth.
Let σ₊ and σ₋ denote the average standard deviation for the values in S₊ and S₋, respectively. Our hypothesis is that we have higher confidence in the correct predictions, that is, σ₊ will typically be smaller than σ₋. In other words, we hypothesize that the relative difference between the average deviations, Δ(σ) = 2(σ₋ − σ₊)/(σ₊ + σ₋), is larger than 0. Under the corresponding null hypothesis, we would expect any difference in average standard deviation, and therefore any nonzero Δ(σ), to be purely coincidental or noise. Assuming that such noise in the Δ(σ)'s, which we computed for each fold, can be approximated by a Gaussian distribution with 0 mean and unknown variance⁵, we test the null hypothesis using a two-tailed Z-test with the observed sample variance. The Z-test scores on the 4 differently sized splits are reported in Figure 2 b) and allow us to reject the null hypothesis with very high confidence. Figure 2 b) also lists Δ(σ) for each split averaged across the multiple folds and shows that σ₋ is about 40% larger than σ₊ on average.

⁵ Even if the standard deviations in S₊, S₋ are not normally distributed, the central limit theorem postulates that their averages will eventually follow a normal distribution under independence assumptions.

4.2 Algorithm performance

[Figure 3: a) KL divergence by sample size (average, lowest-quartile, and highest-quartile KL divergence vs. number of samples from 30,000 to 3,000,000, log-log scale); b) Runtime in seconds for 1000 samples vs. number of potential functions (0 to 10,000).]

In investigating the performance of the sampling algorithm we are mainly interested in two questions: 1) How many samples does it take to converge on the marginal density functions? and 2) What is the computational cost of sampling? To answer the first question, we collect independent samples of varying size from 31 thousand to 2 million, and one reference sample with 3 million steps, for all folds. For each of the former samples we compare the marginals thus obtained to those of the reference sample by measuring their KL divergence. To compute the KL divergence we discretize the density function using a histogram with 10 bins. The center line in Figure 3 a) shows the average KL divergence with respect to the sample size across all folds. To study the impact of dimensionality on convergence, we order the folds by the number of random variables n and show the average KL divergence for the lowest and highest quartile, which contain 174–224 and 322–413 random variables, respectively. The plot is drawn in log-log scale and therefore suggests that each magnitude increase in sample size yields a magnitude improvement in KL divergence. To answer the second question, Figure 3 b) displays the time needed to generate 1000 samples with respect to the number of potential functions in the CCMRF. Computing the induced probability density function along the sampled line segment dominates the cost of each sampling step, and the graph shows that this cost grows linearly with the number of potential functions.
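The convergence measurements above compare two histogram-discretized marginals; a direct transcription is given below, where the 10-bin discretization follows the text and the additive smoothing constant is our assumption to avoid division by zero on empty bins.

```python
import numpy as np

def kl_divergence(samples_p, samples_q, bins=10, eps=1e-12):
    """Discretize two sets of marginal samples on [0, 1] into `bins`
    histogram bins (as in Section 4.2) and compute KL(p || q)."""
    p, _ = np.histogram(samples_p, bins=bins, range=(0.0, 1.0))
    q, _ = np.histogram(samples_q, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```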
Acknowledgments

We thank Stanley Kok, Stephan Bach, and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0937094. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

References

[1] E. B. Sudderth. Graphical models for visual object recognition and tracking. Ph.D. thesis, Massachusetts Institute of Technology, 2006.
[2] L. Lovász and S. Vempala. Hit-and-run from a corner. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 310–314, Chicago, IL, USA, 2004. ACM.
[3] M. Broecheler, L. Mihalkova, and L. Getoor. Probabilistic similarity logic. In Conference on Uncertainty in Artificial Intelligence, 2010.
[4] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107–136, 2006.
[5] K. Kersting and L. De Raedt. Bayesian logic programs. Technical report, Albert-Ludwigs University, 2001.
[6] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proceedings of UAI-02, 2002.
[7] M. Broecheler, G. Simari, and V.S. Subrahmanian. Using histograms to better answer queries to probabilistic logic programs. Logic Programming, page 4054, 2009.
[8] M. E. Dyer and A. M. Frieze. On the complexity of computing the volume of a polyhedron. SIAM Journal on Computing, 17(5):967–974, October 1988.
[9] R. Kannan, L. Lovász, and M. Simonovits. Random walks and an O*(n⁵) volume algorithm for convex bodies. Random Structures and Algorithms, 11(1):1–50, 1997.
[10] R. L. Smith. Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Operations Research, 32(6):1296–1308, 1984.
[11] S. Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382–392, 1954.
[12] T. S. Motzkin and I. J. Schoenberg. The relaxation method for linear inequalities. IJ Schoenberg: Selected Papers, page 75, 1988.
[13] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
Optimal Bayesian Recommendation Sets and Myopically Optimal Choice Query Sets

Paolo Viappiani*
Department of Computer Science
University of Toronto
[email protected]

Craig Boutilier
Department of Computer Science
University of Toronto
[email protected]

* From 9/2010 to 12/2010 at the University of Regina; from 01/2011 onwards at Aalborg University.

Abstract

Bayesian approaches to utility elicitation typically adopt (myopic) expected value of information (EVOI) as a natural criterion for selecting queries. However, EVOI optimization is usually computationally prohibitive. In this paper, we examine EVOI optimization using choice queries, queries in which a user is asked to select her most preferred product from a set. We show that, under very general assumptions, the optimal choice query w.r.t. EVOI coincides with the optimal recommendation set, that is, a set maximizing the expected utility of the user selection. Since recommendation set optimization is a simpler, submodular problem, this can greatly reduce the complexity of both exact and approximate (greedy) computation of optimal choice queries. We also examine the case where user responses to choice queries are error-prone (using both constant and mixed multinomial logit noise models) and provide worst-case guarantees. Finally, we present a local search technique for query optimization that works extremely well with large outcome spaces.

1 Introduction

Utility elicitation is a key component in many decision support applications and recommender systems, since appropriate decisions or recommendations depend critically on the preferences of the user on whose behalf decisions are being made. Since full elicitation of user utility is prohibitively expensive in most cases (w.r.t. time, cognitive effort, etc.), we must often rely on partial utility information. Thus, in interactive preference elicitation, one must selectively decide which queries are most informative relative to the goal of making good or optimal recommendations.

A variety of principled approaches have been proposed for this problem. A number of these focus directly on (myopically or heuristically) reducing uncertainty regarding utility parameters as quickly as possible, including max-margin [10], volumetric [12], polyhedral [22] and entropy-based [1] methods. A different class of approaches does not attempt to reduce utility uncertainty for its own sake, but rather focuses on discovering utility information that improves the quality of the recommendation. These include regret-based [3, 23] and Bayesian [7, 6, 2, 11] models. We focus on Bayesian models in this work, assuming some prior distribution over user utility parameters and conditioning this distribution on information acquired from the user (e.g., query responses or behavioral observations). The most natural criterion for choosing queries is expected value of information (EVOI), which can be optimized myopically [7] or sequentially [2]. However, optimization of EVOI for online query selection is not feasible except in the most simple cases. Hence, in practice, heuristics are used that offer no theoretical guarantees with respect to query quality.

In this paper we consider the problem of myopic EVOI optimization using choice queries. Such queries are commonly used in conjoint analysis and product design [15], requiring a user to indicate which choice/product is most preferred from a set of k options.
We show that, under very general assumptions, optimization of choice queries reduces to the simpler problem of choosing the optimal recommendation set, i.e., the set of k products such that, if a user were forced to choose one, the expected utility of that choice is maximized. Not only is the optimal recommendation set problem somewhat easier computationally, it is submodular, admitting a greedy algorithm with approximation guarantees. Thus, it can be used to determine approximately optimal choice queries. We develop this connection under several different (noisy) user response models. Finally, we describe query iteration, a local search technique that, though it has no formal guarantees, finds near-optimal recommendation sets and queries much faster than either exact or greedy optimization.

2 Background: Bayesian Recommendation and Elicitation

We assume a system is charged with the task of recommending an option to a user in some multiattribute space, for instance, the space of possible product configurations from some domain (e.g., computers, cars, rental apartments, etc.). Products are characterized by a finite set of attributes X = {X₁, ..., Xₙ}, each with finite domain Dom(Xᵢ). Let X ⊆ Dom(X) denote the set of feasible configurations. For instance, attributes may correspond to the features of various cars, such as color, engine size, fuel economy, etc., with X defined either by constraints on attribute combinations (e.g., constraints on computer components that can be put together) or by an explicit database of feasible configurations (e.g., a rental database). The user has a utility function u : Dom(X) → ℝ. The precise form of u is not critical, but in our experiments we assume that u(x; w) is linear in the parameters (or weights) w (e.g., as in generalized additive independent (GAI) models [8, 5]). We often refer to w as the user's "utility function" for simplicity, assuming a fixed form for u. A simple additive model in the car domain might be: u(Car; w) = w₁f₁(MPG) + w₂f₂(EngineSize) + w₃f₃(Color). The optimal product x*_w for a user with utility parameters w is the x ∈ X that maximizes u(x; w).

Generally, a user's utility function w will not be known with certainty. Following recent models of Bayesian elicitation, the system's uncertainty is reflected in a distribution, or beliefs, P(w; θ) over the space W of possible utility functions [7, 6, 2]. Here θ denotes the parameterization of our model, and we often refer to θ as our belief state. Given P(·; θ), we define the expected utility of an option x to be EU(x; θ) = ∫_W u(x; w) P(w; θ) dw. If required to make a recommendation given belief θ, the optimal option x*(θ) is that with greatest expected utility EU*(θ) = max_{x∈X} EU(x; θ), with x*(θ) = arg max_{x∈X} EU(x; θ).

In some settings, we are able to make set-based recommendations: rather than recommending a single option, a small set of k options can be presented, from which the user selects her most preferred option [15, 20, 23]. We discuss the problem of constructing an optimal recommendation set S further below. Given recommendation set S with x ∈ S, let S ▷ x denote that x has the greatest utility among the items in S (for a given utility function w). Given feasible utility space W, we define W ∩ S ▷ x ≡ {w ∈ W : u(x; w) ≥ u(y; w), ∀y ≠ x, y ∈ S} to be those utility functions satisfying S ▷ x. Ignoring "ties"
Ignoring "ties" over full-dimensional subsets of W (which are easily dealt with, but complicate the presentation), the regions W ∩ (S → x_i), x_i ∈ S, partition utility space. A recommender system can refine its belief state θ by learning more about the user's utility function w. A reduction in uncertainty will lead to better recommendations (in expectation). While many sources of information can be used to assess a user's preferences, including the preferences of related users, as in collaborative filtering [14], or observed user choice behavior [15, 19], we focus on explicit utility elicitation, in which a user is asked questions about her preferences. There are a variety of query types that can be used to refine one's knowledge of a user's utility function (we refer to [13, 3, 5] for further discussion). Comparison queries are especially natural, asking a user if she prefers one option x to another y. These comparisons can be localized to specific (subsets of) attributes in additive or GAI models, and such structured models allow responses w.r.t. specific options to "generalize", providing constraints on the utility of related options. In this work we consider the extension of comparisons to choice sets of more than two options [23], as is common in conjoint analysis [15, 22]. Any set S can be interpreted as a query: the user states which of the k elements x_i ∈ S she prefers. We refer to S interchangeably as a query or a choice set. The user's response to a choice set tells us something about her preferences; but this depends on the user response model. In a noiseless model, the user correctly identifies the preferred item in the slate: the choice of x_i ∈ S refines the set of feasible utility functions W by imposing k − 1 linear constraints of the form u(x_i; w) ≥ u(x_j; w), j ≠ i, and the new belief state is obtained by restricting θ to have non-zero density only on W ∩ (S → x_i) and renormalizing. More generally, a noisy response model allows that a user may select an option that does not maximize her utility. For any choice set S with x_i ∈ S, let S ⇝ x_i denote the event of the user selecting x_i. A response model R dictates, for any choice set S, the probability P_R(S ⇝ x_i; w) of any selection given utility function w. When the beliefs about a user's utility are uncertain, we define P_R(S ⇝ x_i; θ) = ∫_W P_R(S ⇝ x_i; w) P(w; θ) dw. We discuss various response models below. When treating S as a query set (as opposed to a recommendation set), we are not interested in its expected utility, but rather in its expected value of information (EVOI), or the (expected) degree to which a response will increase the quality of the system's recommendation. We define:

Definition 1. Given belief state θ, the expected posterior utility (EPU) of query set S under R is

EPU_R(S; θ) = Σ_{x∈S} P_R(S ⇝ x; θ) EU*(θ | S ⇝ x).   (1)

EVOI(S; θ) is then EPU_R(S; θ) − EU*(θ), the expected improvement in decision quality given S. An optimal query (of fixed size k) is any S with maximal EVOI, or equivalently, maximal EPU.

In many settings, we may wish to present a set of options to a user with the dual goals of offering a good set of recommendations and eliciting valuable information about user utility. For instance, product navigation interfaces for e-commerce sites often display a set of options from which a user can select, but also give the user a chance to critique the proposed options [24]. This provides one motivation for exploring the connection between optimal recommendation sets and optimal query sets. Moreover, even in settings where queries and recommendations are separated, we will see that query optimization can be made more efficient by exploiting this relationship.
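Since EPU and EVOI are expectations over the belief state, they are straightforward to estimate with a particle approximation of P(w; θ). The following is a minimal sketch of this computation; the function and argument names (epu, evoi, response_prob, utilities) are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def epu(S, weights, utilities, response_prob):
    """Monte Carlo estimate of EPU_R(S; theta), Eq. (1).

    weights       : (m,) particle weights approximating P(w; theta), summing to 1
    utilities     : (n, m) array with utilities[x, j] = u(x; w_j) for each option x
    response_prob : callable (S, j) -> length-|S| vector of P_R(S ~> x_i; w_j)
    """
    m = len(weights)
    value = 0.0
    for i in range(len(S)):
        # P_R(S ~> x_i; w_j) for every particle, then marginalize over the belief
        p_w = np.array([response_prob(S, j)[i] for j in range(m)])
        p_resp = float(np.dot(weights, p_w))
        if p_resp == 0.0:
            continue
        posterior = weights * p_w / p_resp                 # belief update given response x_i
        value += p_resp * utilities.dot(posterior).max()   # EU*(theta | S ~> x_i)
    return value

def evoi(S, weights, utilities, response_prob):
    # EVOI(S; theta) = EPU(S; theta) - EU*(theta)
    return epu(S, weights, utilities, response_prob) - utilities.dot(weights).max()
```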
3 Optimal Recommendation Sets

We consider first the problem of computing optimal recommendation sets given the system's uncertainty about the user's true utility function w. Given belief state θ, if a single recommendation is to be made, then we should recommend the option x*(θ) that maximizes expected utility EU(x; θ). However, there is often value in suggesting a "shortlist" containing multiple options and allowing the user to select her most preferred one. Intuitively, such a set should offer options that are diverse in the following sense: recommended options should be highly preferred relative to a wide range of "likely" user utility functions (relative to θ) [23, 20, 4]. This stands in contrast to some recommender systems that define diversity relative to product attributes [21], with no direct reference to beliefs about user utility. It is not hard to see that "top k" systems, those that present the k options with highest expected utility, do not generally produce good recommendation sets [20]. In broad terms, we assume that the utility of a recommendation set S is the utility of its most preferred item. However, it is unrealistic to assume that users will select their most preferred item with complete accuracy [17, 15]. So, as with choice queries, we assume a response model R dictating the probability P_R(S ⇝ x; θ) of any choice x from S:

Definition 2. The expected utility of selection (EUS) of recommendation set S given θ and R is

EUS_R(S; θ) = Σ_{x∈S} P_R(S ⇝ x; θ) EU(x; θ | S ⇝ x).   (2)

We can expand the definition to rewrite EUS_R(S; θ) as

EUS_R(S; θ) = ∫_W [ Σ_{x∈S} P_R(S ⇝ x; w) u(x; w) ] P(w; θ) dw.   (3)

User behavior is largely dictated by the response model R. In the ideal setting, users would always select the option with highest utility w.r.t. the true utility function w. This noiseless model is assumed in [20], for example, but is unrealistic in general. Noisy response models admit user "mistakes", and the choice of optimal sets should reflect this possibility (just as belief update does; see Defn. 1). Possible constraints on response models include: (i) preference bias, under which a more preferred outcome in the slate given w is selected with probability greater than a less preferred outcome; and (ii) Luce's choice axiom [17], a form of independence of irrelevant alternatives that requires the relative probability (if not 0 or 1) of selecting any two items x and y from S to be unaffected by the addition or deletion of other items from the set. We consider three different response models:

- In the noiseless response model R_NL, we have P_NL(S ⇝ x; w) = Π_{y∈S} I[u(x; w) ≥ u(y; w)] (with indicator function I). EUS then becomes

EUS_NL(S; θ) = ∫_W [ max_{x∈S} u(x; w) ] P(w; θ) dw.

This is identical to the expected max criterion of [20]. Under R_NL we have S ⇝ x iff S → x.

- The constant noise model R_C assumes a multinomial distribution over choices in which each option x, apart from the most preferred option x*_w relative to w, is selected with (small) constant probability P_C(S ⇝ x; w) = β, with β independent of w. We assume β < 1/k, so the most preferred option is selected with probability P_C(S ⇝ x*_w; w) = α = 1 − (k−1)β > β. This generalizes the model used in [10, 2] to sets of any size. If x*_w(S) is the optimal element in S given w, and u*_w(S) is its utility, then EUS is

EUS_C(S; θ) = ∫_W [ α u*_w(S) + β Σ_{y ∈ S∖{x*_w(S)}} u(y; w) ] P(w; θ) dw.
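Under a particle approximation, the noiseless and constant-noise EUS expressions above reduce to simple weighted averages. The sketch below assumes the same particle representation as before; all names are illustrative.

```python
import numpy as np

def eus_noiseless(S, weights, utilities):
    # EUS_NL(S; theta): the "expected max" criterion of [20]
    return float(np.dot(weights, utilities[S].max(axis=0)))

def eus_constant(S, weights, utilities, beta):
    # EUS_C(S; theta): best item chosen w.p. alpha = 1 - (k-1)*beta, others w.p. beta
    k = len(S)
    assert beta < 1.0 / k
    alpha = 1.0 - (k - 1) * beta
    u_S = utilities[S]              # (k, m): utilities of slate members per particle
    best = u_S.max(axis=0)          # u*_w(S) for each particle
    rest = u_S.sum(axis=0) - best   # total utility of the k-1 non-best items
    return float(np.dot(weights, alpha * best + beta * rest))
```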
- The logistic response model R_L is commonly used in choice modeling, and is variously known as the Luce–Sheppard [16], Bradley–Terry [11], or mixed multinomial logit model. Selection probabilities are given by

P_L(S ⇝ x; w) = exp(γ u(x; w)) / Σ_{y∈S} exp(γ u(y; w)),

where γ is a temperature parameter. For comparison queries (i.e., |S| = 2), R_L is the logistic function of the difference in utility between the two options.

We now consider properties of the expected utility of selection EUS under these various models. All three models satisfy preference bias, but only R_NL and R_L satisfy Luce's choice axiom. EUS is monotone under the noiseless response model R_NL: the addition of options to a recommendation set S cannot decrease its expected utility EUS_NL(S; θ). Moreover, say that option x_i dominates x_j relative to belief state θ if u(x_i; w) > u(x_j; w) for all w with nonzero density. Adding a set-wise dominated option x to S (i.e., an x dominated by some element of S) does not change expected utility under R_NL: EUS_NL(S ∪ {x}; θ) = EUS_NL(S; θ). This stands in contrast to noisy response models, where adding dominated options might actually decrease expected utility. Importantly, EUS is submodular for both the noiseless and constant response models:

Theorem 1. For R ∈ {R_NL, R_C}, EUS_R is a submodular function of the set S. That is, given recommendation sets S ⊆ Q, an option x ∉ S, S′ = S ∪ {x}, and Q′ = Q ∪ {x}, we have

EUS_R(S′; θ) − EUS_R(S; θ) ≥ EUS_R(Q′; θ) − EUS_R(Q; θ).   (4)

The proof is omitted, but simply shows that EUS has the required property of diminishing returns. Submodularity serves as the basis for a greedy optimization algorithm (see Section 5 and the worst-case results on query optimization below). EUS under the commonly used logistic response model R_L is not submodular, but can be related to EUS under the noiseless model, as we discuss next, allowing us to exploit submodularity of the noiseless model when optimizing w.r.t. R_L.

4 The Connection between EUS and EPU

We now develop the connection between optimal recommendation sets (using EUS) and optimal choice queries (using EPU/EVOI). As discussed above, we are often interested in sets that can serve as both good recommendations and good queries; and since EPU/EVOI can be computationally difficult, good methods for EUS-optimization can serve to generate good queries as well, provided we have a tight relationship between the two. In what follows, we make use of a transformation T_{θ,R} that modifies a set S in such a way that EUS usually increases (and in the case of R_NL and R_C cannot decrease). This transformation is used in two ways: (i) to prove the optimality (near-optimality in the case of R_L) of EUS-optimal recommendation sets when used as query sets; and (ii) directly as a computationally viable heuristic strategy for generating query sets.

Definition 3. Let S = {x_1, ..., x_k} be a set of options. Define

T_{θ,R}(S) = { x*(θ | S ⇝ x_1; R), ..., x*(θ | S ⇝ x_k; R) },

where x*(θ | S ⇝ x_i; R) is the optimal option (in expectation) when θ is conditioned on S ⇝ x_i under response model R.

Intuitively, T (we drop the subscripts when θ and R are clear from context) refines a recommendation set S of size k by producing the k updated beliefs for each possible user choice, and replacing each option in S with the optimal option under the corresponding update. Note that T generally produces different sets under different response models.
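The operator T of Definition 3 requires only a belief update per possible response followed by an expected-utility maximization, so a particle-based sketch is short. As before, this is an illustrative implementation, not the authors' code.

```python
import numpy as np

def transform_T(S, weights, utilities, response_prob):
    """One application of T of Definition 3: each slate item x_i is replaced by the
    myopically optimal option under the belief conditioned on the response S ~> x_i."""
    m = len(weights)
    new_S = []
    for i in range(len(S)):
        p_w = np.array([response_prob(S, j)[i] for j in range(m)])
        z = float(np.dot(weights, p_w))
        posterior = weights * p_w / z if z > 0 else weights
        new_S.append(int(utilities.dot(posterior).argmax()))
    return new_S
```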
Indeed, one could use T to construct a set using one response model, and measure the EUS or EPU of the resulting set under a different response model; some of our theoretical results use this type of cross-evaluation. We first show that optimal recommendation sets under both R_NL and R_C are optimal (i.e., EPU/EVOI-maximizing) query sets.

Lemma 1. EUS_R(T_{θ,R}(S); θ) ≥ EPU_R(S; θ) for R ∈ {NL, C}.

Proof: For R_NL, the argument relies on partitioning W w.r.t. the options in S:

EPU_NL(S; θ) = Σ_{i,j} P(S → x_i, T(S) → x*_j; θ) EU(x*_i; θ[S → x_i, T(S) → x*_j])   (5)
EUS_NL(T(S); θ) = Σ_{i,j} P(S → x_i, T(S) → x*_j; θ) EU(x*_j; θ[S → x_i, T(S) → x*_j])   (6)

where x*_i denotes x*(θ | S → x_i). Compare the two expressions componentwise: 1) if i = j, the components of each expression are the same; 2) if i ≠ j, then for any w with nonzero density in θ[S → x_i, T(S) → x*_j] we have u(x*_j; w) ≥ u(x*_i; w), and thus EU(x*_j) ≥ EU(x*_i) in the region S → x_i, T(S) → x*_j. Since EUS_NL(T(S); θ) ≥ EPU_NL(S; θ) in each component, the result follows. For R_C the proof uses the same argument, along with the observation that EUS_C(S; θ) = Σ_i P(S → x_i; θ) ( α EU(x_i; θ[S → x_i]) + β Σ_{j≠i} EU(x_j; θ[S → x_i]) ).

From Lemma 1 and the fact that EUS_R(S; θ) ≤ EPU_R(S; θ), it follows that EUS_R(T(S); θ) ≥ EUS_R(S; θ). We now state the main theorem (we assume the size k of S is fixed):

Theorem 2. Assume response model R ∈ {NL, C} and let S* be an optimal recommendation set. Then S* is an optimal query set: EPU(S*; θ) ≥ EPU(S; θ) for all S ∈ X^k.

Proof: Suppose S* is not an optimal query set, i.e., there is some S s.t. EPU(S; θ) > EPU(S*; θ). Applying T to S gives a new query set T(S) for which, by the results above, EUS(T(S); θ) ≥ EPU(S; θ) > EPU(S*; θ) ≥ EUS(S*; θ). This contradicts the EUS-optimality of S*.

Another consequence of Lemma 1 is that posing a query S involving an infeasible option is pointless: there is always a set with only elements in X with EPU/EVOI at least as great. This is proved by observing that the lemma still holds if T is redefined to allow sets containing infeasible options. It is not hard to see that admitting noisy responses under the logistic response model R_L can decrease the value of a recommendation set, i.e., EUS_L(S; θ) ≤ EUS_NL(S; θ). However, the loss in EUS under R_L can in fact be bounded. The logistic response model is such that, if the probability of incorrect selection of some option is high, then the utility of that option must be close to that of the best item, so the relative loss in utility is small. Conversely, if the loss associated with some incorrect selection is great, its utility must be significantly less than that of the best option, rendering such an event extremely unlikely. This allows us to bound the difference between EUS_NL and EUS_L by a value Δ_max that depends only on the set cardinality k and on the temperature parameter γ (we derive an expression for Δ_max below):

Theorem 3. EUS_L(S; θ) ≥ EUS_NL(S; θ) − Δ_max.

Under R_L, our transformation T_L does not, in general, improve the value EUS_L(S) of a recommendation set S. However, the set T_L(S) is such that its value EUS_NL, assuming selection under the noiseless model, is greater than the expected posterior utility EPU_L(S) under R_L:

Lemma 2. EUS_NL(T_L(S); θ) ≥ EPU_L(S; θ).

We use this fact below to prove that the optimal recommendation set under R_L is a near-optimal query under R_L. It has two other consequences. First, from Thm. 3 it follows that EUS_L(T_L(S); θ) ≥ EPU_L(S; θ) − Δ_max.
Second, the EPU of the optimal query under the noiseless model is at least as great as that of the optimal query under the logistic model: EPU*_NL(θ) ≥ EPU*_L(θ). (Note that EPU_L(S; θ) is not necessarily less than EPU_NL(S; θ): there are sets S for which a noisy response might be "more informative" than a noiseless one. However, this is not the case for optimal query sets.) We now derive our main result for logistic responses: the EUS of the optimal recommendation set (and hence its EPU) is at most Δ_max less than the EPU of the optimal query set.

Theorem 4. EUS*_L(θ) ≥ EPU*_L(θ) − Δ_max.

Proof: Consider the optimal query S*_L and the set S′ = T_L(S*_L) obtained by applying T_L. From Lemma 2, EUS_NL(S′; θ) ≥ EPU_L(S*_L; θ) = EPU*_L(θ). From Thm. 3, EUS_L(S′; θ) ≥ EUS_NL(S′; θ) − Δ_max; and from Thm. 2, EUS*_NL(θ) = EPU*_NL(θ). Thus EUS*_L(θ) ≥ EUS_L(S′; θ) ≥ EUS_NL(S′; θ) − Δ_max ≥ EPU*_L(θ) − Δ_max.

The loss Δ(S; θ) = EUS_NL(S; θ) − EUS_L(S; θ) in the EUS of a set S due to logistic noise can be characterized as a function of the utility difference z = u(x_1) − u(x_2) between options x_1 and x_2 of S, integrating over the possible values of z (weighted by θ). For a specific value of z ≥ 0, the EUS-loss is exactly the utility difference z times the probability of choosing the less preferred option under R_L, namely 1 − L(γz) = L(−γz), where L is the logistic function. We have

Δ(S; θ) = ∫_{−∞}^{+∞} |z| · 1/(1 + e^{γ|z|}) P(z; θ) dz.

We derive a problem-independent upper bound on Δ(S; θ), for any S and θ, by maximizing f(z) = z · 1/(1 + e^{γz}) over z ≥ 0. The maximal loss Δ_max = f(z_max) for a set of two hypothetical items s_1 and s_2 is attained by having the same utility difference u(s_1; w) − u(s_2; w) = z_max for every w ∈ W. Imposing ∂f/∂z = 0, we obtain e^{−γz} − γz + 1 = 0. Numerically, this yields z_max ≈ 1.279/γ and Δ_max ≈ 0.2785/γ. This bound can be expressed on a scale that is independent of the temperature parameter γ: intuitively, Δ_max corresponds to a utility difference so slight that the user identifies the best item only with probability 0.56 under R_L with temperature γ. In other words, the maximum loss is so small that the user is unable to identify the preferred item 44% of the time when asked to compare the two items in S. The derivation generalizes to sets of any size k, yielding Δ^k_max = (1/γ) · LW((k−1)/e), where LW(·) is the Lambert W function (the principal branch of the inverse of x ↦ x e^x). Note also that the loss-maximizing set S_max may contain infeasible outcomes, so in practice the loss may be much lower.
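The numerical constants in the derivation above are easy to verify. A short check, assuming SciPy's root finder and Lambert W implementation, is sketched below; it evaluates the stated stationarity condition and the general-k formula.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

gamma = 1.0
# Stationary point of f(z) = z / (1 + exp(gamma*z)): exp(-gamma*z) - gamma*z + 1 = 0
z_max = brentq(lambda z: np.exp(-gamma * z) - gamma * z + 1.0, 1e-9, 10.0)
delta_max = z_max / (1.0 + np.exp(gamma * z_max))
print(z_max, delta_max)   # approx. 1.2785 and 0.2785 for gamma = 1

# General slates: Delta_max^k = (1/gamma) * LW((k-1)/e)
for k in (2, 3, 5, 10):
    print(k, float(lambertw((k - 1) / np.e).real) / gamma)
```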
5 Set Optimization Strategies

We discuss several strategies for the optimization of query/recommendation sets in this section, and summarize their theoretical and computational properties. In what follows, n is the number of options |X|, k is the size of the query/recommendation set, and l is the "cost" of Bayesian inference (e.g., the number of particles in a Monte Carlo sampling procedure).

Exact Methods. The naive maximization of EPU is more computationally intensive than EUS-optimization, and is generally impractical. Given a set S of k elements, computing EPU(S; θ) requires a Bayesian update of θ for each possible response, and an expected utility optimization for each such posterior. Query optimization requires that this be computed for the O(n^k) possible query sets, so EPU maximization is O(n^{k+1} k l). Exact EUS optimization, while still quite demanding, is only O(n^k k l), as it does not require EU-maximization in updated distributions. Thm. 2 allows us to compute optimal query sets using EUS-maximization under R_C and R_NL, reducing complexity by a factor of n. Under R_L, Thm. 4 allows us to use EUS-optimization to approximate the optimal query, with a quality guarantee of EPU* − Δ_max.

Greedy Optimization. A simple greedy algorithm can be used to construct a recommendation set of size k by iteratively adding the option offering the greatest improvement in value: argmax_x EUS_R(S ∪ {x}; θ). Under R_NL and R_C, since EUS is submodular (Thm. 1), the greedy algorithm determines a set with EUS within a factor η = 1 − ((k−1)/k)^k of the optimal value EUS* = EPU* [9]. (This is 75% for comparison queries (k = 2) and at worst 63%, as k → ∞.) Thm. 2 again allows us to use greedy maximization of EUS to determine a query set with similar guarantees. Under R_L, EUS_L is no longer submodular. However, Lemma 2 and Thm. 3 allow us to use EUS_NL, which is submodular, as a proxy. Let S_g be the set determined by greedy optimization of EUS_NL. By submodularity, η EUS*_NL ≤ EUS_NL(S_g) ≤ EUS*_NL; we also have EUS*_L ≤ EUS*_NL. Applying Thm. 3 to S_g gives EUS_L(S_g) ≥ EUS_NL(S_g) − Δ. Thus we derive

EUS_L(S_g) / EUS*_L ≥ (η EUS*_NL − Δ) / EUS*_L ≥ (η EUS*_NL − Δ) / EUS*_NL = η − Δ / EUS*_NL.   (7)

Similarly, we derive a worst-case bound for EPU w.r.t. greedy EUS-optimization (using the fact that EUS is a lower bound for EPU, together with Thm. 3 and Thm. 2):

EPU_L(S_g) / EPU*_L ≥ EUS_L(S_g) / EPU*_L ≥ (η EUS*_NL − Δ) / EPU*_NL = (η EUS*_NL − Δ) / EUS*_NL = η − Δ / EUS*_NL.   (8)

Greedy maximization of S w.r.t. EUS is extremely fast, O(k² n l), i.e., linear in the number of options n: it requires O(kn) evaluations of EUS, each with cost kl. (A practical speedup can be achieved by maintaining a priority queue of options sorted by their potential EUS-contribution, which is monotonically decreasing due to submodularity; when choosing the item to add to the set, we then only need to evaluate a few options at the top of the queue, so-called lazy evaluation.)

Query Iteration. The T transformation (Defn. 3) gives rise to a natural heuristic method for computing good query/recommendation sets, as sketched after this paragraph. Query iteration (QI) starts with an initial set S and locally optimizes it by repeatedly applying the operator T until EUS(T(S); θ) = EUS(S; θ). QI is sensitive to the initial set S, which can lead to different fixed points. We consider several initialization strategies: random (choose k options at random); sampling (include x*(θ), sample k − 1 points w_i from P(w; θ), and for each of these add the corresponding optimal item to S, while forcing distinctness); and greedy (initialize with the greedy set S_g). We can bound the performance of QI relative to optimal query/recommendation sets assuming R_NL or R_C. If QI is initialized with S_g, performance is no worse than greedy optimization. If initialized with an arbitrary set, we note that, because of submodularity, EU* ≤ EUS* ≤ k EU*. The condition T(S) = S implies EUS(S) = EPU(S). Also note that, for any set Q, EPU(Q) ≥ EU*. Thus, EUS(S) ≥ (1/k) EUS* at any fixed point. This means that for comparison queries (|S| = 2), QI achieves at least 50% of the optimal recommendation set value. This bound is tight and corresponds to the degenerate singleton set S_d = {x*(θ), ..., x*(θ)} = {x*(θ)}. This solution is problematic, since T(S_d) = S_d and S_d has EVOI of zero. However, under R_NL, QI with sampling initialization provably avoids this fixed point by construction, always leading to a query set with positive EVOI. The complexity of one iteration of QI is O(nk + lk), i.e., linear in the number of options, exactly like greedy optimization; in practice, however, QI is much faster, since typically k ≪ l.
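The two strategies just described can be written compactly on top of the particle-based helpers sketched earlier (eus_noiseless / eus_constant and transform_T); the code below is again an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def greedy_eus(k, n, weights, utilities, eus):
    """Greedy EUS maximization; under R_NL / R_C submodularity (Thm. 1) gives the
    eta = 1 - ((k-1)/k)^k guarantee quoted above. `eus` is e.g. eus_noiseless."""
    S = []
    for _ in range(k):
        best = max((x for x in range(n) if x not in S),
                   key=lambda x: eus(S + [x], weights, utilities))
        S.append(best)
    return S

def query_iteration(S, weights, utilities, response_prob, eus, max_iters=100):
    """Query Iteration: apply T repeatedly until EUS stops improving (a fixed point)."""
    current = eus(S, weights, utilities)
    for _ in range(max_iters):
        S_new = transform_T(S, weights, utilities, response_prob)  # sketched earlier
        value = eus(S_new, weights, utilities)
        if value <= current:    # fixed point, up to numerical noise
            break
        S, current = S_new, value
    return S
```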
While we have no theoretical results limiting the number of iterations required by QI to converge, in practice a fixed point is reached very quickly (see below).

Evaluation. We compare the strategies above empirically on choice problems with random user utility functions, using both noiseless and noisy response models. (Utility priors are mixtures of 3 Gaussians, with μ ~ U[0, 10] and σ = μ/3 for each component.) Bayesian inference is realized by a Monte Carlo method with importance sampling (particle weights are determined by applying the response model to observed responses). To overcome the problem of particle degeneration (most particles eventually have low or zero weight), we use slice sampling [18] to regenerate particles w.r.t. the response-updated belief state θ whenever the effective number of samples drops significantly (50,000 particles were used in the simulations).

Figure 1(a) shows the average loss of our strategies on an apartment rental dataset with 187 outcomes, each characterized by 10 attributes (either numeric or categorical, with domain sizes 2–6), when asking pairwise comparison queries with noiseless responses. We note that greedy performs almost as well as exact optimization, and the optimal item is found in roughly 10–15 queries. Query iteration performs reasonably well when initialized with sampling, but poorly with random seeds.

[Figure 1: Normalized average loss against number of queries. (a) 187 outcomes, 30 runs, R_NL, for exactEUS, greedy(EUS,NL), QI(sampling), QI(rand) and random. (b) 506 outcomes, 30 runs, R_L, for QI(greedy,L), greedy(EUS,L), greedy(EUS,NL), QI(sampling,NL), QI(rand,L) and random.]

In the second experiment, we consider the Boston Housing dataset with 506 items (1 binary and 13 continuous attributes) and a logistic noise model with γ = 1. We compare the greedy and QI strategies (exact methods are impractical on problems of this size) in Figure 1(b); we also consider a hybrid greedy(EUS,NL) strategy that optimizes assuming noiseless responses, but is evaluated using the true response model R_L. QI(sampling) is more efficient when using T_NL instead of T_L, and this is the version plotted. Overall, these experiments show that (greedy or exact) maximization of EUS is able to find optimal, or near-optimal when responses are noisy, query sets. Finally, we compare query optimization times (in seconds) on the two datasets in the following table, where "-" indicates that the method was impractical at that problem size:

              exactEPU  exactEUS  greedy(EPU,L)  QI(greedy(EUS,L))  greedy(EUS,L)  greedy(EUS,NL)  QI(sampling)  QI(rand)
n=30,  k=2    47.3      10.3      1.5            0.76               0.65           0.12            0.11          0.11
n=187, k=2    1815      405       9.19           2.07               1.97           1.02            0.15          0.17
n=187, k=4    -         10000     39.7           7.89               7.71           1.86            0.16          0.19
n=187, k=6    -         -         87.1           15.7               15.4           2.55            0.51          0.64
n=506, k=2    -         -         14.6           4.09               3.99           0.93            0.05          0.06
n=506, k=4    -         -         64.9           15.4               15.2           1.12            0.08          0.10
n=506, k=6    -         -         142            32.9               32.8           1.53            0.09          0.13

Among our strategies, QI is clearly the most efficient computationally, and is best suited to large outcome spaces.
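The particle machinery used in these experiments boils down to importance reweighting of the belief after each observed response, with particle regeneration when the effective sample size collapses. A minimal sketch of the reweighting step under R_L follows (the slice-sampling regeneration of [18] is omitted); names are illustrative.

```python
import numpy as np

def update_belief(weights, utilities, S, chosen_idx, gamma=1.0):
    """Condition the particle approximation of P(w; theta) on the response S ~> x,
    under the logistic model R_L. Returns new weights and the effective sample size."""
    u_S = utilities[S]                              # (k, m)
    e = np.exp(gamma * (u_S - u_S.max(axis=0)))     # numerically stable softmax
    probs = e / e.sum(axis=0)                       # P_L(S ~> x_i; w_j)
    new_w = weights * probs[chosen_idx]
    new_w /= new_w.sum()
    ess = 1.0 / np.sum(new_w ** 2)                  # trigger regeneration if ess is low
    return new_w, ess
```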
Interestingly, QI is often faster with sampling initialization than with random initialization, because it needs fewer iterations on average before convergence (3.1 vs. 4.0).

6 Conclusions

We have provided a novel analysis of set-based recommendations in Bayesian recommender systems, and have shown how it offers a tractable means of generating myopically optimal or near-optimal choice queries for preference elicitation. We examined several user response models, showing that optimal recommendation sets are EVOI-optimal queries under the noiseless and constant noise models, and that they are near-optimal under the logistic/Luce–Sheppard model (both theoretically and practically). We stress that our results are general and do not depend on the specific implementation of Bayesian update, nor on the specific form of the utility function. Our greedy strategies, exploiting submodularity of the EUS computation, perform very well in practice and have theoretical approximation guarantees. Finally, our experimental results demonstrate that query iteration, a simple local search strategy, is especially well-suited to large decision spaces. A number of important directions for future research remain. Further theoretical and practical investigation of local search strategies such as query iteration is important. Another direction is the development of strategies for Bayesian recommendation and elicitation in large-scale configuration problems, e.g., where outcomes are specified by a CSP, and for sequential decision problems (such as MDPs with uncertain rewards). Finally, we are interested in elicitation strategies that combine probabilistic and regret-based models.

Acknowledgements. The authors would like to thank Iain Murray and Cristina Manfredotti for helpful discussions on Monte Carlo methods, sampling techniques and particle filters. This research was supported by NSERC.

References
[1] Ali Abbas. Entropy methods for adaptive utility elicitation. IEEE Transactions on Systems, Science and Cybernetics, 34(2):169–178, 2004.
[2] Craig Boutilier. A POMDP formulation of preference elicitation problems. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-02), pp. 239–246, Edmonton, 2002.
[3] Craig Boutilier, Relu Patrascu, Pascal Poupart, and Dale Schuurmans. Constraint-based optimization and utility elicitation using the minimax decision criterion. Artificial Intelligence, 170(8–9):686–713, 2006.
[4] Craig Boutilier, Richard S. Zemel, and Benjamin Marlin. Active collaborative filtering. In Proc. 19th Conference on Uncertainty in Artificial Intelligence (UAI-03), pp. 98–106, Acapulco, 2003.
[5] Darius Braziunas and Craig Boutilier. Minimax regret-based elicitation of generalized additive utilities. In Proc. 23rd Conference on Uncertainty in Artificial Intelligence (UAI-07), pp. 25–32, Vancouver, 2007.
[6] U. Chajewska and D. Koller. Utilities as random variables: Density estimation and structure discovery. In Proc. 16th Conference on Uncertainty in Artificial Intelligence (UAI-00), pp. 63–71, Stanford, 2000.
[7] U. Chajewska, D. Koller, and R. Parr. Making rational decisions using adaptive utility elicitation. In Proc. 17th National Conference on Artificial Intelligence (AAAI-00), pp. 363–369, Austin, TX, 2000.
[8] Peter C. Fishburn. Interdependence and additivity in multivariate, unidimensional expected utility theory. International Economic Review, 8:335–342, 1967.
[9] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher.
An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, December 1978.
[10] Krzysztof Gajos and Daniel S. Weld. Preference elicitation for interface optimization. In Patrick Baudisch, Mary Czerwinski, and Dan R. Olsen, editors, UIST, pp. 173–182. ACM, 2005.
[11] Shengbo Guo and Scott Sanner. Real-time multiattribute Bayesian preference elicitation with pairwise comparison queries. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS-10), Sardinia, Italy, 2010.
[12] V. S. Iyengar, J. Lee, and M. Campbell. Q-Eval: Evaluating multiple attribute items using queries. In Proceedings of the Third ACM Conference on Electronic Commerce, pp. 144–153, Tampa, FL, 2001.
[13] Ralph L. Keeney and Howard Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York, 1976.
[14] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77–87, 1997.
[15] Jordan J. Louviere, David A. Hensher, and Joffre D. Swait. Stated Choice Methods: Analysis and Application. Cambridge University Press, Cambridge, 2000.
[16] Christopher G. Lucas, Thomas L. Griffiths, Fei Xu, and Christine Fawcett. A rational model of preference learning and choice prediction by children. In Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, pp. 985–992, Vancouver, Canada, 2008.
[17] Robert D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, New York, 1959.
[18] Radford M. Neal. Slice sampling. The Annals of Statistics, 31(3):705–767, 2003.
[19] A. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. 17th International Conference on Machine Learning (ICML-00), pp. 663–670, Stanford, CA, 2000.
[20] Robert Price and Paul R. Messinger. Optimal recommendation sets: Covering uncertainty over user preferences. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), pp. 541–548, 2005.
[21] James Reilly, Kevin McCarthy, Lorraine McGinty, and Barry Smyth. Incremental critiquing. Knowledge-Based Systems, 18(4–5):143–151, 2005.
[22] Olivier Toubia, John Hauser, and Duncan Simester. Polyhedral methods for adaptive choice-based conjoint analysis. Journal of Marketing Research, 41:116–131, 2004.
[23] Paolo Viappiani and Craig Boutilier. Regret-based optimal recommendation sets in conversational recommender systems. In Proceedings of the 3rd ACM Conference on Recommender Systems (RecSys-09), pp. 101–108, New York, 2009.
[24] Paolo Viappiani, Boi Faltings, and Pearl Pu. Preference-based search using example-critiquing with suggestions. Journal of Artificial Intelligence Research, 27:465–503, 2006.
Inter-time segment information sharing for non-homogeneous dynamic Bayesian networks

Dirk Husmeier & Frank Dondelinger
Biomathematics & Statistics Scotland (BioSS), JCMB, The King's Buildings, Edinburgh EH9 3JZ, United Kingdom
[email protected], [email protected]

Sophie Lèbre
Université de Strasbourg, LSIIT - UMR 7005, 67412 Illkirch, France
[email protected]

Abstract

Conventional dynamic Bayesian networks (DBNs) are based on the homogeneous Markov assumption, which is too restrictive in many practical applications. Various approaches to relax the homogeneity assumption have recently been proposed, allowing the network structure to change with time. However, unless time series are very long, this flexibility leads to the risk of overfitting and inflated inference uncertainty. In the present paper we investigate three regularization schemes based on inter-segment information sharing, choosing different prior distributions and different coupling schemes between nodes. We apply our method to gene expression time series obtained during the Drosophila life cycle, and compare the predicted segmentation with other state-of-the-art techniques. We conclude our evaluation with an application to synthetic biology, where the objective is to predict a known in vivo regulatory network of five genes in yeast.

1 Introduction

There is currently considerable interest in structure learning of dynamic Bayesian networks (DBNs), with a variety of applications in signal processing and computational biology; see e.g. [1, 2, 3]. The standard assumption underlying DBNs is that time series have been generated from a homogeneous Markov process. This assumption is too restrictive in many applications and can potentially lead to erroneous conclusions. While there have been various efforts to relax the homogeneity assumption for undirected graphical models [4, 5], relaxing this restriction in DBNs is a more recent research topic [1, 2, 3, 6, 7, 8]. At present, none of the proposed methods is without its limitations, leaving room for further methodological innovation. The method proposed in [3, 8] is non-Bayesian. It requires certain regularization parameters to be optimized "externally", by applying information criteria (like AIC or BIC), cross-validation or bootstrapping. The first approach is suboptimal; the latter approaches are computationally expensive (see [9] for a demonstration of the higher computational costs of bootstrapping over Bayesian approaches based on MCMC). In the present paper we therefore follow the Bayesian paradigm, like [1, 2, 6, 7]. These approaches also have their limitations. The method proposed in [2] assumes a fixed network structure and only allows the interaction parameters to vary with time. This assumption is too rigid when looking at processes where changes in the overall regulatory network structure are expected, e.g. in morphogenesis or embryogenesis. The method proposed in [1] requires a discretization of the data, which incurs an inevitable information loss. These limitations are addressed in [6, 7], where the authors propose a method for continuous data that allows network structures associated with different nodes to change with time in different ways. However, this high flexibility causes potential problems when applied to time series with a low number of measurements, as typically available from systems biology, leading to overfitting or inflated inference uncertainty.
The objective of the work described in our paper is to propose a model that addresses the principled shortcomings of the three Bayesian methods mentioned above. Unlike [1], our model is continuous and therefore avoids the information loss inherent in a discretization of the data. Unlike [2], our model allows the network structure to change among segments, leading to greater model flexibility. As an improvement on [6, 7], our model introduces information sharing among time series segments, which provides an essential regularization effect.

2 Background: non-homogeneous DBNs without information coupling

This section briefly summarizes the non-homogeneous DBN proposed in [6, 7], which combines the Bayesian regression model of [10] with multiple changepoint processes and pursues Bayesian inference with reversible jump Markov chain Monte Carlo (RJMCMC) [11]. In what follows, we will refer to nodes as genes and to the network as a gene regulatory network; the method is not restricted to molecular systems biology, though.

2.1 Model

Multiple changepoints: Let p be the number of observed genes, whose expression values y = {y_i(t)}_{1≤i≤p, 1≤t≤N} are measured at N time points. M represents a directed graph, i.e. the network defined by a set of directed edges among the p genes. M_i is the subnetwork associated with target gene i, determined by the set of its parents (nodes with a directed edge feeding into gene i). The regulatory relationships among the genes, defined by M, may vary across time, which we model with a multiple changepoint process. For each target gene i, an unknown number k_i of changepoints define k_i + 1 non-overlapping segments. Segment h = 1, ..., k_i + 1 starts at changepoint ξ_i^{h−1} and stops before ξ_i^h, where ξ_i = (ξ_i^0, ..., ξ_i^{h−1}, ξ_i^h, ..., ξ_i^{k_i+1}) with ξ_i^{h−1} < ξ_i^h. To delimit the bounds, ξ_i^0 = 2 and ξ_i^{k_i+1} = N + 1. Thus vector ξ_i has length |ξ_i| = k_i + 2. The set of changepoints is denoted by ξ = {ξ_i}_{1≤i≤p}. This changepoint process induces a partition of the time series, y_i^h = (y_i(t))_{ξ_i^{h−1} ≤ t < ξ_i^h}, with different structures M_i^h associated with the different segments h ∈ {1, ..., k_i + 1}. Identifiability is ensured by ordering the changepoints based on their position in the time series.

Regression model: For each gene i, the random variable Y_i(t) refers to the expression of gene i at time t. Within any segment h, the expression of gene i depends on the p gene expression values measured at the previous time point through a regression model defined by (a) a set of s_i^h parents, denoted M_i^h = {j_1, ..., j_{s_i^h}} ⊆ {1, ..., p} with |M_i^h| = s_i^h, and (b) a set of parameters ((a_{ij}^h)_{j∈{0,...,p}}, σ_i^h), with a_{ij}^h ∈ ℝ and σ_i^h > 0. For all j ≠ 0, a_{ij}^h = 0 if j ∉ M_i^h. For all genes i and all time points t in segment h (ξ_i^{h−1} ≤ t < ξ_i^h), the random variable Y_i(t) depends on the p variables {Y_j(t−1)}_{1≤j≤p} according to

Y_i(t) = a_{i0}^h + Σ_{j∈M_i^h} a_{ij}^h Y_j(t−1) + ε_i(t),   (1)

where the noise ε_i(t) is assumed to be Gaussian with mean 0 and variance (σ_i^h)², i.e. ε_i(t) ~ N(0, (σ_i^h)²). We define a_i^h = (a_{ij}^h)_{j∈{0,...,p}}.
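To make the generative model of Eq. (1) concrete, the following sketch simulates data from a piecewise-homogeneous linear regression network. Purely to keep the illustration short, it uses one shared changepoint vector and one shared parent set per segment for all genes, although the model above allows these to be gene-specific; all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 10, 60
xi = [2, 20, 40, N + 1]            # segment h covers xi[h-1] <= t < xi[h]
y = np.zeros((p, N + 1))           # column t holds y(t); y(1) is the initial condition
y[:, 1] = rng.normal(size=p)
segments = len(xi) - 1
parents = [rng.choice(p, size=3, replace=False) for _ in range(segments)]
coeff = [rng.normal(scale=0.3, size=3) for _ in range(segments)]
sigma = 0.3
for h in range(segments):
    for t in range(xi[h], xi[h + 1]):
        for i in range(p):         # Eq. (1): regression on the parents' previous values
            y[i, t] = 0.1 + coeff[h] @ y[parents[h], t - 1] + rng.normal(0.0, sigma)
```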
2.2 Prior

The k_i + 1 segments are delimited by k_i changepoints, where k_i is distributed a priori as a truncated Poisson random variable with mean λ and maximum k̄ = N − 2: P(k_i | λ) ∝ (λ^{k_i} / k_i!) 1{k_i ≤ k̄}. Conditional on k_i changepoints, the changepoint position vector ξ_i = (ξ_i^0, ξ_i^1, ..., ξ_i^{k_i+1}) takes non-overlapping integer values, which we take to be uniformly distributed a priori. There are C(N−2, k_i) possible positions for the k_i changepoints, so vector ξ_i has prior density P(ξ_i | k_i) = 1 / C(N−2, k_i). For all genes i and all segments h, the number s_i^h of parents for node i follows a truncated Poisson distribution with mean Λ and maximum s̄ = 5: P(s_i^h | Λ) ∝ (Λ^{s_i^h} / s_i^h!) 1{s_i^h ≤ s̄}. (A restrictive Poisson prior encourages sparsity of the network, and is therefore comparable to a sparse exponential prior or an approach based on the LASSO.) Conditional on s_i^h, the prior for the parent set M_i^h is a uniform distribution over all parent sets with cardinality s_i^h: P(M_i^h | |M_i^h| = s_i^h) = 1 / C(p, s_i^h). The overall prior on the network structures is given by marginalization:

P(M_i^h | Λ) = Σ_{s_i^h=1}^{s̄} P(M_i^h | s_i^h) P(s_i^h | Λ).   (2)

Conditional on the parent set M_i^h of size s_i^h, the s_i^h + 1 regression coefficients, denoted by a_{M_i^h} = (a_{i0}^h, (a_{ij}^h)_{j∈M_i^h}), are assumed zero-mean multivariate Gaussian with covariance matrix (σ_i^h)² Σ_{M_i^h}:

P(a_i^h | M_i^h, σ_i^h) = |2π (σ_i^h)² Σ_{M_i^h}|^{−1/2} exp( − a_{M_i^h}† Σ_{M_i^h}^{−1} a_{M_i^h} / (2 (σ_i^h)²) ),   (3)

where † denotes matrix transposition, Σ_{M_i^h} = δ² (D_{M_i^h}(y)† D_{M_i^h}(y))^{−1}, and D_{M_i^h}(y) is the (ξ_i^h − ξ_i^{h−1}) × (s_i^h + 1) matrix whose first column is a vector of 1s (for the constant in model (1)) and whose (j+1)th column contains the observed values (y_j(t))_{ξ_i^{h−1}−1 ≤ t < ξ_i^h−1} for each factor gene j in M_i^h. This prior was also used in [10] and is motivated in [12]. Finally, the conjugate prior for the variance (σ_i^h)² is the inverse gamma distribution, P((σ_i^h)²) = IG(υ_0, γ_0). Following [6, 7], we set the hyper-hyperparameters for shape, υ_0 = 0.5, and scale, γ_0 = 0.05, to fixed values that give a vague distribution. The terms λ and Λ can be interpreted as the expected number of changepoints and parents, respectively, and δ² is the expected signal-to-noise ratio. These hyperparameters are drawn from vague conjugate hyperpriors in the (inverse) gamma distribution family: P(λ) = P(Λ) = Ga(0.5, 1) and P(δ²) = IG(2, 0.2).

2.3 Posterior

Equation (1) implies that

P(y_i^h | ξ_i^{h−1}, ξ_i^h, M_i^h, a_i^h, σ_i^h) = (√(2π) σ_i^h)^{−(ξ_i^h − ξ_i^{h−1})} exp( − (y_i^h − D_{M_i^h}(y) a_{M_i^h})† (y_i^h − D_{M_i^h}(y) a_{M_i^h}) / (2 (σ_i^h)²) ).   (4)

From Bayes' theorem, the posterior is given by the following equation, where all prior distributions have been defined above:

P(k, ξ, M, a, σ, λ, Λ, δ² | y) ∝ P(δ²) P(λ) P(Λ) Π_{i=1}^p P(k_i | λ) P(ξ_i | k_i) Π_{h=1}^{k_i+1} P(M_i^h | Λ) P([σ_i^h]²) P(a_i^h | M_i^h, [σ_i^h]², δ²) P(y_i^h | ξ_i^{h−1}, ξ_i^h, M_i^h, a_i^h, [σ_i^h]²).   (5)

2.4 Inference

An attractive feature of the chosen model is that the marginalization over the parameters a and σ in the posterior distribution of (5) is analytically tractable:

P(k, ξ, M, λ, Λ, δ² | y) = ∫∫ P(k, ξ, M, a, σ, λ, Λ, δ² | y) da dσ.   (6)

See [6, 10] for details and an explicit expression. The number of changepoints and their locations, k and ξ, the network structure M, and the hyperparameters λ, Λ, δ² can then be sampled from the posterior P(k, ξ, M, λ, Λ, δ² | y) with RJMCMC [11]. A detailed description can be found in [6, 10].

3 Model improvement: information coupling between segments

Allowing the network structure to change between segments leads to a highly flexible model. However, this approach faces a conceptual and a practical problem. The practical problem is potential model over-flexibility: if subsequent changepoints are close together, network structures have to be inferred from short time series segments.
This will almost inevitably lead to overfitting (in a maximum likelihood context) or inflated inference uncertainty (in a Bayesian context). The conceptual problem is the underlying assumption that structures associated with different segments are a priori independent. This is not realistic. For instance, for the evolution of a gene regulatory network during embryogenesis, we would assume that the network evolves gradually and that networks associated with adjacent time intervals are a priori similar. To address these problems, we propose three methods of information sharing among time series segments, as illustrated in Figure 1. The first method is based on hard information coupling between the nodes, using the exponential distribution proposed in [13]. The second scheme is also based on hard information coupling, but uses a binomial distribution with conjugate beta prior. The third scheme is based on the same distributional assumptions as the second, but replaces the hard information coupling by a soft one.

[Figure 1: Hierarchical Bayesian models for inter-segment and inter-node information coupling. (a) Hard coupling between nodes, with a common hyperparameter regulating the strength of the coupling between structures associated with adjacent segments, M_i^h and M_i^{h+1}. This corresponds to the models of Section 3.1 (coupling hyperparameter β, uniform hyperprior on [0, 10], no level-2 hyperparameters) and Section 3.2 (coupling hyperparameters {a, b}, level-2 hyperparameters {α, β, γ, δ} with uniform hyperpriors on [0, 20]). (b) Soft coupling between nodes, with node-specific hyperparameters {a_i, b_i} coupled via the level-2 hyperparameters {α, β, γ, δ} on [0, 20]; this corresponds to the model of Section 3.3.]

3.1 Hard information coupling based on an exponential prior

Denote by K_i := k_i + 1 the total number of partitions in the time series associated with node i, and recall that each time series segment y_i^h is associated with a separate subnetwork M_i^h, 1 ≤ h ≤ K_i. We impose a prior distribution P(M_i^h | M_i^{h−1}, β) on the structures, and the joint probability distribution factorizes according to a Markovian dependence:

P(y_i^1, ..., y_i^{K_i}, M_i^1, ..., M_i^{K_i}, β) = P(β) Π_{h=1}^{K_i} P(y_i^h | M_i^h) P(M_i^h | M_i^{h−1}, β).   (7)

Similar to [13], we define, for h ≥ 2,

P(M_i^h | M_i^{h−1}, β) = exp(−β |M_i^h − M_i^{h−1}|) / Z_i(β, M_i^{h−1}),   (8)

where β is a hyperparameter that defines the strength of the coupling between M_i^h and M_i^{h−1}, and |·| denotes the Hamming distance. For h = 1, P(M_i^h) is given by (2). The denominator Z_i(β, M_i^{h−1}) in (8) is a normalizing constant, also known as the partition function: Z(β) = Σ_{M_i^h ∈ 𝕄} exp(−β |M_i^h − M_i^{h−1}|), where 𝕄 is the set of all valid subnetwork structures. If we ignore any fan-in restriction that might have been imposed a priori (via s̄), the expression for the partition function simplifies: Z(β) = Π_{j=1}^p Z_j(β), where Z_j(β) = Σ_{e_j^h ∈ {0,1}} exp(−β |e_j^h − e_j^{h−1}|) = 1 + e^{−β}, and hence Z(β) = (1 + e^{−β})^p. Inserting this expression into (8) gives:

P(M_i^h | M_i^{h−1}, β) = exp(−β |M_i^h − M_i^{h−1}|) / (1 + e^{−β})^p.   (9)

It is straightforward to integrate the proposed model into the RJMCMC scheme of [6, 7] described in Section 2.4. When proposing a new network structure M_i^h → M̃_i^h for segment h, the prior probability ratio has to be replaced by

[ P(M_i^{h+1} | M̃_i^h, β) P(M̃_i^h | M_i^{h−1}, β) ] / [ P(M_i^{h+1} | M_i^h, β) P(M_i^h | M_i^{h−1}, β) ].

An additional MCMC step is introduced for sampling the hyperparameter β from the posterior distribution.
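In log space, the exponential coupling prior of Eq. (9) and the ratio entering the β-update of the sampler take only a few lines. The sketch below (in Python rather than the authors' R code) is purely illustrative; structures are represented as 0/1 parent-indicator vectors.

```python
import numpy as np

def log_exp_prior(M_prev, M_curr, beta, p):
    """log P(M^h | M^{h-1}, beta), Eq. (9); M_* are 0/1 parent-indicator arrays."""
    hamming = int(np.sum(np.asarray(M_prev) != np.asarray(M_curr)))
    return -beta * hamming - p * np.log1p(np.exp(-beta))

def log_beta_move_ratio(structures, beta, beta_new, p):
    """log acceptance ratio for beta -> beta_new under a uniform prior and a
    symmetric proposal; `structures` is a list over nodes, each a list of
    indicator vectors, one per segment."""
    out = 0.0
    for segs in structures:
        for h in range(1, len(segs)):
            out += log_exp_prior(segs[h - 1], segs[h], beta_new, p)
            out -= log_exp_prior(segs[h - 1], segs[h], beta, p)
    return out  # accept with probability min(1, exp(out))
```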
For a proposal move β → β̃ with symmetric proposal probability Q(β̃ | β) = Q(β | β̃), we get the acceptance probability

A(β̃ | β) = min{ [P(β̃) / P(β)] Π_{i=1}^p Π_{h=2}^{K_i} [ exp(−β̃ |M_i^h − M_i^{h−1}|) (1 + e^{−β})^p ] / [ exp(−β |M_i^h − M_i^{h−1}|) (1 + e^{−β̃})^p ], 1 },

where in our study the hyperprior P(β) was chosen as the uniform distribution on the interval [0, 10].

3.2 Hard information coupling based on a binomial prior

An alternative way of sharing information among segments and nodes is to use a binomial prior:

P(M_i^h | M_i^{h−1}, a, b) = a^{N₁¹[h,i]} (1−a)^{N₁⁰[h,i]} b^{N₀¹[h,i]} (1−b)^{N₀⁰[h,i]},   (10)

where we have defined the following sufficient statistics: N₁¹[h,i] is the number of edges in M_i^{h−1} that are matched by an edge in M_i^h, N₁⁰[h,i] is the number of edges in M_i^{h−1} for which there is no edge in M_i^h, N₀¹[h,i] is the number of edges in M_i^h for which there is no edge in M_i^{h−1}, and N₀⁰[h,i] is the number of coinciding non-edges in M_i^{h−1} and M_i^h. Since the hyperparameters are shared, the joint distribution can be expressed as

P({M_i^h} | a, b) = Π_{i=1}^p [ P(M_i^1) Π_{h=2}^{K_i} P(M_i^h | M_i^{h−1}, a, b) ] = a^{N₁¹} (1−a)^{N₁⁰} b^{N₀¹} (1−b)^{N₀⁰} Π_{i=1}^p P(M_i^1),   (11)

where we have defined N_k^l = Σ_{i=1}^p Σ_{h=2}^{K_i} N_k^l[h,i], and the right-hand side follows from Eq. (10). The conjugate prior for the hyperparameters a, b is a beta distribution, P(a, b | α, β, γ, δ) ∝ a^{α−1} (1−a)^{β−1} b^{γ−1} (1−b)^{δ−1}, which allows the hyperparameters to be integrated out in closed form:

P({M_i^h} | α, β, γ, δ) = ∫∫ P({M_i^h} | a, b) P(a, b | α, β, γ, δ) da db
∝ [Γ(α+β) / (Γ(α)Γ(β))] [Γ(N₁¹+α) Γ(N₁⁰+β) / Γ(N₁¹+α+N₁⁰+β)] × [Γ(γ+δ) / (Γ(γ)Γ(δ))] [Γ(N₀¹+γ) Γ(N₀⁰+δ) / Γ(N₀¹+γ+N₀⁰+δ)].   (12)

The level-2 hyperparameters α, β, γ, δ are given a uniform hyperprior over [0, 20]. The MCMC scheme of Section 2.4 has to be modified as follows. When proposing a new network structure M_i^h → M̃_i^h for node i and segment h, the structures M_i^h and M̃_i^h enter the prior probability ratio via the expression P({M_i^h} | α, β, γ, δ), as

P({M_i^1, ..., M̃_i^h, ..., M_i^{K_i}}_{i=1}^p | α, β, γ, δ) / P({M_i^1, ..., M_i^h, ..., M_i^{K_i}}_{i=1}^p | α, β, γ, δ).

Note that, as a consequence of integrating out the hyperparameters, all network structures become interdependent, and information about the structures is contained in the sufficient statistics N₁¹, N₁⁰, N₀¹, N₀⁰. A new proposal move for the level-2 hyperparameters is added to the existing RJMCMC scheme of Section 2.4. New values for a level-2 hyperparameter x ∈ {α, β, γ, δ} are proposed from a uniform distribution over a fixed interval. For a move x → x̃, the acceptance probability is

A(x̃ | x) = min{ P({M_i^1, ..., M_i^{K_i}}_{i=1}^p | x̃, {α, β, γ, δ}∖x) / P({M_i^1, ..., M_i^{K_i}}_{i=1}^p | x, {α, β, γ, δ}∖x), 1 },

where {α, β, γ, δ}∖x corresponds to {β, γ, δ} if x designates hyperparameter α, and similarly for β, γ, δ.

3.3 Soft information coupling based on a binomial prior

We can relax the information sharing scheme from a hard to a soft coupling by introducing node-specific hyperparameters a_i, b_i that are softly coupled via a common level-2 hyperprior, P(a_i, b_i | α, β, γ, δ) ∝ a_i^{α−1} (1−a_i)^{β−1} b_i^{γ−1} (1−b_i)^{δ−1}, as illustrated in Figure 1(b):

P(M_i^h | M_i^{h−1}, a_i, b_i) = a_i^{N₁¹[h,i]} (1−a_i)^{N₁⁰[h,i]} b_i^{N₀¹[h,i]} (1−b_i)^{N₀⁰[h,i]}.   (13)

This leads to a straightforward modification of Eq. (11), replacing a, b by a_i, b_i, from which we obtain an analogue of Eq. (12), using the definition N_k^l[i] = Σ_{h=2}^{K_i} N_k^l[h,i]:

P(M_i^1, ..., M_i^{K_i} | α, β, γ, δ) ∝ [Γ(α+β) / (Γ(α)Γ(β))] [Γ(N₁¹[i]+α) Γ(N₁⁰[i]+β) / Γ(N₁¹[i]+α+N₁⁰[i]+β)] × [Γ(γ+δ) / (Γ(γ)Γ(δ))] [Γ(N₀¹[i]+γ) Γ(N₀⁰[i]+δ) / Γ(N₀¹[i]+γ+N₀⁰[i]+δ)].   (14)
As in Section 3.2, we extend the RJMCMC scheme of Section 2.4 so that when proposing a new network structure M_i^h → M̃_i^h, the acceptance probability is updated with the prior ratio

P(M_i^1, ..., M̃_i^h, ..., M_i^{K_i} | α, β, γ, δ) / P(M_i^1, ..., M_i^h, ..., M_i^{K_i} | α, β, γ, δ).

In addition, we add a new level-2 hyperparameter update move x → x̃, where the prior and proposal probabilities are the same as in Section 3.2, and the acceptance probability becomes

A(x̃ | x) = min{ Π_{i=1}^p P(M_i^1, ..., M_i^{K_i} | x̃, {α, β, γ, δ}∖x) / P(M_i^1, ..., M_i^{K_i} | x, {α, β, γ, δ}∖x), 1 }.
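Both the hard scheme of Eq. (12) and the soft scheme of Eq. (14) need only the four sufficient statistics and a ratio of gamma functions, best evaluated in log space. A sketch with illustrative names follows.

```python
import numpy as np
from scipy.special import gammaln

def sufficient_stats(M_prev, M_curr):
    # M_* are 0/1 edge-indicator arrays of the same shape
    n11 = int(np.sum((M_prev == 1) & (M_curr == 1)))
    n10 = int(np.sum((M_prev == 1) & (M_curr == 0)))
    n01 = int(np.sum((M_prev == 0) & (M_curr == 1)))
    n00 = int(np.sum((M_prev == 0) & (M_curr == 0)))
    return n11, n10, n01, n00

def log_structure_marginal(n11, n10, n01, n00, alpha_, beta_, gamma_, delta_):
    """log of Eq. (12)/(14): binomial coupling with (a, b) integrated out against
    their beta priors, given the sufficient statistics defined in Section 3.2."""
    def term(x, y, k1, k0):
        return (gammaln(x + y) - gammaln(x) - gammaln(y)
                + gammaln(k1 + x) + gammaln(k0 + y) - gammaln(k1 + x + k0 + y))
    return term(alpha_, beta_, n11, n10) + term(gamma_, delta_, n01, n00)
```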
We compared the network reconstruction accuracy of the non-homogeneous DBN without information sharing proposed in [6, 7] (HetDBN-0) with the three information sharing approaches, based on the exponential prior from Section 3.1 (HetDBN-Exp), the binomial prior with hard node coupling from Section 3.2 (HetDBN-Bino1), and the binomial prior with soft node coupling from Section 3.3 (HetDBN-Bino2). Figures 2(a) and 2(b) show the network reconstruction performance of the different information sharing methods in terms of AUROC and AUPRC scores. All information sharing methods show a clear improvement in network reconstruction over HetDBN-0, as confirmed by paired t-tests (p < 0.01). We investigated two different situations: the case where all segment structures are the same (although edge weights are allowed to vary), and the case where changes are applied sequentially to the segments (for each node, the number of changes was drawn from a Poisson with mean 1). Information sharing is most beneficial for the first case, but even when we introduce changes we still see an increase in the network reconstruction scores compared to HetDBN-0. When all segments are the same, HetDBN-Bino1 and HetDBN-Bino2 outperform HetDBN-Exp (p < 0.05), but there is no significant difference between the two binomial methods. Paired t-tests showed that all other differences in mean are significant. When the segments are different, all information sharing methods outperform HetDBN-0 (p < 0.05), but the difference between the information sharing methods is not significant.

4.2 Morphogenesis in Drosophila melanogaster

We applied our methods to a gene expression time series for eleven genes involved in the muscle development of Drosophila melanogaster [16]. The microarray data measured gene expression levels during all four major stages of morphogenesis: embryo, larva, pupa and adult. We investigated whether our methods were able to infer the correct changepoints corresponding to the known transitions between stages. Figure 3(a) shows the posterior probabilities of inferred changepoints for any gene using HetDBN-0, while Figure 3(c) shows the posterior probabilities for the information sharing methods.

Figure 3: Changepoints inferred on gene expression data related to morphogenesis in Drosophila melanogaster, and synthetic biology in Saccharomyces cerevisiae (yeast). All figures using HetDBN plot the posterior probability of a changepoint occurring for any node at a given time, plotted against time. 3(a): HetDBN-0 changepoints for Drosophila (no information sharing). 3(b): TESLA, L1-norm of the difference of the regression parameter vectors associated with two adjacent time points, plotted against time. 3(c): HetDBN changepoints for Drosophila with information sharing; the method is indicated by the legend. 3(d): HetDBN changepoints for the synthetic gene regulatory network in yeast. In 3(a)-3(c), the vertical dotted lines indicate the three morphogenic transitions, while in 3(d) the line indicates the boundary between "switch on" and "switch off" data.

For comparison, we applied the method proposed in [3], using the authors' software package TESLA (Figure 3(b)). Robinson and Hartemink applied the discrete non-homogeneous DBN in [1] to the same data set, and a plot corresponding to Figure 3(b) can be found in their paper. Our non-homogeneous DBN methods are generally more successful than TESLA, in that they recover changepoints for all three transitions (embryo → larva, larva → pupa, and pupa → adult). Figure 3(b) indicates that the last transition, pupa → adult, is less clearly detected with TESLA, and it is completely missing in [1]. Both our method as well as TESLA detect additional transitions during the embryo stage, which are missing in [1]. We would argue that a complex gene regulatory network is unlikely to transition into a new morphogenic phase all at once, and some pathways might have to undergo activational changes earlier in preparation for the morphogenic transition. As such, it is not implausible that additional transitions at the gene regulatory network level occur. However, a failure to detect known morphogenic transitions can clearly be seen as a shortcoming of a method, and on these grounds our model appears to outperform the two alternative ones. We note that the main effect of information sharing is to reduce the size of the smaller peaks, while keeping the three most salient peaks (corresponding to larva → pupa, and pupa → adult, and an extra transition in the embryo phase). This reflects the fact that these changepoints are associated with significant changes in network structure, and adds to the interpretability of the results. The drawback is that the third morphological transition (embryo → larva) is less pronounced.

4.3 Reconstruction of a synthetic gene regulatory network in Saccharomyces cerevisiae

The highly topical field of synthetic biology enables biologists to design known gene regulatory networks in living cells. In the work described in [17], a synthetic regulatory network of 5 genes was constructed in Saccharomyces cerevisiae (yeast), and gene expression time series were measured with RT-PCR for 16 and 21 time points under two experimental conditions, related to the carbon source: galactose ("switch on") and glucose ("switch off"). The authors tried to reconstruct the known gold-standard network from these time series with two established state-of-the-art methods from computational systems biology, one based on ordinary differential equations (ODEs), called TSNI, the other based on conventional DBNs, called Banjo; see [17] for details.

Figure 4: Reconstruction of a known gene regulatory network from synthetic biology in yeast. The network was reconstructed from two gene expression time series obtained with RT-PCR in two experimental conditions, reflecting the switch in the carbon source from galactose ("switch on") to glucose ("switch off"). The reconstruction accuracy of the methods proposed in Section 3, where the legend is explained, is shown in terms of precision (vertical axis) - recall (horizontal axis) curves. Results were averaged over 10 independent MCMC simulations. For comparison, fixed precision/recall scores are shown for two state-of-the-art methods reported in [17]: Banjo, a conventional DBN, and TSNI, a method based on ODEs.

Both methods are optimization-based and output a single network. By comparison with the known gold standard, the authors obtained the precision (proportion of predicted interactions that are correct) and recall (proportion of predicted true interactions) scores. In our study, we merged the time series from the two experimental conditions under exclusion of the boundary point (when merging two time series (x₁, ..., x_m) and (y₁, ..., y_n), only the pairs x_i → x_j and y_i → y_j are presented to the DBN, while the pair x_m → y₁ is excluded due to the obvious discontinuity), and applied the four non-homogeneous DBNs described before. Figure 3(d) shows the inferred marginal posterior probability of potential changepoints. The most significant changepoint is at the boundary between "switch on" and "switch off" data, confirming that the known true changepoint is consistently identified. The biological mechanism behind the other peaks is not known, and they are potentially spurious. Interestingly, the application of the proposed information-coupling schemes reduces the height of these peaks, with the binomial models having a stronger effect than the exponential one. As we pursue a Bayesian inference scheme, we also obtain a ranking of the potential gene interactions in terms of their marginal posterior probabilities. From this we computed the precision-recall curves [15] shown in Figure 4. Our non-homogeneous DBNs with information sharing outperform Banjo and TSNI both in the "switch on" and the "switch off" phase. They also perform better than HetDBN-0 on the "switch off" data, but are slightly worse on the "switch on" data. Note that the reconstruction accuracy on the "switch off" data is generally poorer than on the "switch on" data [17]. Our results are thus plausible, suggesting that information sharing boosts the reconstruction accuracy on the poorer time series segment at the cost of a degraded performance on the stronger one. This effect is more pronounced for the exponential prior than for the binomial one, indicating a tighter coupling. The average areas under the PR curves, averaged over both phases ("switch on" and "switch off"), are as follows: HetDBN-0 = 0.70, HetDBN-Exp = 0.77, HetDBN-Bino1 = 0.75, HetDBN-Bino2 = 0.75. Hence, the overall effect of information sharing is a performance improvement.

5 Conclusions

We have described a non-homogeneous DBN, which has various advantages over existing schemes: it does not require the data to be discretized (as opposed to [1]); it allows the network structure to change with time (as opposed to [2]); it includes three different regularization schemes based on inter-time segment information sharing (as opposed to [6, 7]); and it allows all hyperparameters to be inferred from the data via a consistent Bayesian inference scheme (as opposed to [3]). An evaluation on simulated data has demonstrated an improved performance over [6, 7] when information sharing is introduced. The application of our method to gene expression time series taken during the life cycle of Drosophila melanogaster has revealed better agreement with known morphogenic transitions than the methods of [1] and [3]. We have carried out a comparative evaluation of different information coupling schemes: a binomial versus an exponential prior, and hard versus soft coupling.
In an application to data from a topical study in synthetic biology, our methods have outperformed two established network reconstruction methods from computational systems biology.

References

[1] J. W. Robinson and A. J. Hartemink. Non-stationary dynamic Bayesian networks. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1369-1376. Morgan Kaufmann Publishers, 2009.
[2] M. Grzegorczyk and D. Husmeier. Non-stationary continuous dynamic Bayesian networks. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS), volume 22, pages 682-690. 2009.
[3] A. Ahmed and E. P. Xing. Recovering time-varying networks of dependencies in social and biological studies. Proceedings of the National Academy of Sciences, 106:11878-11883, 2009.
[4] M. Talih and N. Hengartner. Structural learning with time-varying components: Tracking the cross-section of financial time series. Journal of the Royal Statistical Society B, 67(3):321-341, 2005.
[5] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In Zoubin Ghahramani, editor, Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007), pages 1055-1062. Omnipress, 2007.
[6] S. Lèbre. Stochastic process analysis for Genomics and Dynamic Bayesian Networks inference. PhD thesis, Université d'Évry-Val-d'Essonne, France, 2007.
[7] S. Lèbre, J. Becq, F. Devaux, G. Lelandais, and M.P.H. Stumpf. Statistical inference of the time-varying structure of gene-regulation networks. BMC Systems Biology, 4(130), 2010.
[8] M. Kolar, L. Song, and E. Xing. Sparsistent learning of varying-coefficient models with structural changes. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS), volume 22, pages 1006-1014. 2009.
[9] B. Larget and D. L. Simon. Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees. Molecular Biology and Evolution, 16(6):750-759, 1999.
[10] C. Andrieu and A. Doucet. Joint Bayesian model selection and estimation of noisy sinusoids via reversible jump MCMC. IEEE Transactions on Signal Processing, 47(10):2667-2676, 1999.
[11] P. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711-732, 1995.
[12] A. Zellner. On assessing prior distributions and Bayesian regression analysis with g-prior distributions. In P. Goel and A. Zellner, editors, Bayesian Inference and Decision Techniques, pages 233-243. Elsevier, 1986.
[13] A. V. Werhli and D. Husmeier. Gene regulatory network reconstruction by Bayesian integration of prior knowledge and/or different experimental conditions. Journal of Bioinformatics and Computational Biology, 6(3):543-572, 2008.
[14] A. Gelman and D.B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4):457-472, 1992.
[15] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, page 240. ACM, 2006.
[16] M.N. Arbeitman, E.E.M. Furlong, F. Imam, E. Johnson, B.H. Null, B.S. Baker, M.A. Krasnow, M.P. Scott, R.W. Davis, and K.P. White. Gene expression during the life cycle of Drosophila melanogaster. Science, 297(5590):2270-2275, 2002.
[17] I. Cantone, L. Marucci, F. Iorio, M.A. Ricci, V. Belcastro, M. Bansal, S. Santini, M. di Bernardo, D. di Bernardo, and M.P. Cosma. A yeast synthetic network for in vivo assessment of reverse-engineering and modeling approaches. Cell, 137(1):172-181, 2009.
Evaluation of Rarity of Fingerprints in Forensics

Chang Su and Sargur Srihari
Department of Computer Science and Engineering
University at Buffalo, Amherst, NY 14260
{changsu,srihari}@buffalo.edu

Abstract

A method for computing the rarity of latent fingerprints represented by minutiae is given. It allows determining the probability of finding a match for an evidence print in a database of n known prints. The probability of random correspondence between evidence and database is determined in three procedural steps. In the registration step the latent print is aligned by finding its core point, which is done using a procedure based on a machine learning approach based on Gaussian processes. In the evidence probability evaluation step a generative model based on Bayesian networks is used to determine the probability of the evidence; it takes into account both the dependency of each minutia on nearby minutiae and the confidence of their presence in the evidence. In the specific probability of random correspondence step the evidence probability is used to determine the probability of a match among n for a given tolerance; the last evaluation is similar to the birthday correspondence probability for a specific birthday. The generative model is validated using a goodness-of-fit test evaluated with a standard database of fingerprints. The probability of random correspondence for several latent fingerprints is evaluated for varying numbers of minutiae.

1 Introduction

In many forensic domains it is necessary to characterize the degree to which a given piece of evidence is unique. For instance, in the case of DNA a probability statement is made after a match has been confirmed between the evidence and the known, e.g., that the chance that a randomly selected person would have the same DNA pattern is 1 in 24,000,000, which is a description of the rarity of the evidence/known [1]. In the case of fingerprint evidence there is uncertainty at two levels: the similarity between the evidence and the known, and the rarity of the known. This paper explores the evaluation of the rarity of a fingerprint as characterized by a given set of features. Recent court challenges have highlighted the need for statistical research on this problem, especially if it is stated that a high degree of similarity is present between the evidence and the known [2]. A statistical measure of the weight of evidence in forensics is a likelihood ratio (LR) defined as follows [3]. It is the ratio between the joint probability that the evidence and known come from the same source, and the joint probability that the two come from two different sources. If the underlying distributions are Gaussian, the LR can be simplified as the product of two exponential factors: the first is a significance test of the null hypothesis of identity, and the second measures rarity. Since evaluation of the joint probability is difficult for fingerprints, which are characterized by variable sets of minutia points, with each point itself expressed as a 3-tuple of spatial co-ordinates and an angle, the LR computation is usually replaced by one wherein a similarity (or kernel) function is introduced between the evidence and the known and the likelihood ratio is computed for the similarity [4, 5]. While such efforts concern the significance of the null hypothesis of identity, fingerprint rarity continues to be a difficult problem and has never been solved. This paper describes a systematic approach for the computation of the rarity of fingerprints in a robust and reliable manner.
The process involves several individual steps. Due to the varying quality of fingerprints collected from the crime scene, called latent prints, a registration process is needed to determine which area of finger skin the print comes from; Section 2 describes the use of Gaussian processes to predict core points by which prints can be aligned. In Section 3 a generative model based on Bayesian networks is proposed to model the distribution of minutiae as well as the dependencies between them. To measure rarity, a metric for assessing the probability of random correspondence of a specific print against n samples is defined in Section 4. The model is validated using a goodness-of-fit test in Section 5. Some examples of evaluation of rarity are given in Section 6.

2 Fingerprint Registration

The fingerprint collected from the crime scene is usually only a small portion of the complete fingerprint. So the feature set extracted from the print only contains relative spatial relationships. It is obvious that feature sets with the same relative spatial relationship can lead to different rarity if they come from different areas of the fingertip. To solve this problem, we first predict the core points and then align the fingerprints by overlapping their core points. In biometrics and fingerprint analysis, the core point refers to the center area of a fingerprint. In practice, the core point corresponds to the center of the north most loop type singularity. For fingerprints that do not contain loop or whorl singularities, the core is usually associated with the point of maximum ridge line curvature [6]. The most popular approach proposed for core point detection is the Poincare Index (PI), which is developed by [7, 8, 9]. Another commonly used method [10] is a sine map based method that is realized by multi-resolution analysis. Methods based on Fourier expansion [11], fingerprint structures [12] and multi-scale analysis [13] have also been proposed. All of these methods require that the fingerprints are complete and the core points can be seen in the prints. But this is not the case for all fingerprints. Latent prints are usually small partial prints and do not contain core points. So there is no way to detect them by the above computational vision based approaches. We propose a core point prediction approach that turns this problem into a regression problem. Since the ridge flow directions reveal the intrinsic features of ridge topologies, they have critical impact on core point prediction. The orientation maps are therefore used to predict the core points. A fingerprint field orientation map is defined as a collection of two-dimensional direction fields. It represents the directions of ridge flows in regular spaced grids. The gradients of gray intensity of enhanced fingerprints are estimated to obtain reliable ridge orientation [9]. Given an orientation map of a fingerprint, the core point is predicted using Gaussian processes. Gaussian processes dispense with the parametric model and instead define a probability distribution over functions directly. This provides more flexibility and better prediction. The advantage of the Gaussian process model also comes from the probabilistic formulation [14]. Instead of representing the core point as a single value, the prediction of the core point from a Gaussian process model takes the form of a full predictive distribution.

Suppose we have a training set D of N fingerprints, D = {(g_i, y_i) | i = 1, ..., N}, where g denotes the orientation map of a fingerprint and y denotes the output, which is the core point. In order to predict the core points, a Gaussian process model with squared exponential covariance function is applied. The regression model with Gaussian noise is given by

y = f(g) + ε    (1)

where f(g) is the value of the process or function f(x) at g and ε is a random noise variable whose value is chosen independently for each observation. We consider noise processes that have a Gaussian distribution, so that the Gaussian likelihood for the core point is given by

p(y | f(g)) = N(f, σ²I)    (2)

where σ² is the variance of the noise. From the definition of a Gaussian process, the Gaussian process prior is given by a Gaussian whose mean is zero and whose covariance is defined by a covariance function k(g, g′), so that

f(g) ~ GP(0, k(g, g′))    (3)

The squared exponential covariance function is used here to specify the covariance between pairs of variables, parameterized by θ₁ and θ₂:

k(g, g′) = θ₁ exp(−(θ₂/2) |g − g′|²)    (4)

where the hyperparameters θ₁ and θ₂ are optimized by maximizing the log likelihood p(y | θ₁, θ₂). Suppose the orientation map of an input fingerprint is given by g*. The Gaussian predictive distribution of the core point y* can be evaluated by conditioning the joint Gaussian prior distribution on the observation (G, y), where G = (g₁, ..., g_N)ᵀ and y = (y₁, ..., y_N)ᵀ. The predictive distribution is given by

p(y* | g*, G, y) = N(m(y*), cov(y*))    (5)

where

m(y*) = k(g*, G)ᵀ [K + σ²I]⁻¹ y    (6)
cov(y*) = k(g*, g*) + σ² − k(g*, G)ᵀ [K + σ²I]⁻¹ k(G, g*)    (7)

where K is the Gram matrix whose elements are given by k(g_i, g_j).

Note that for some fingerprints, such as latent fingerprints collected from crime scenes, their locations in the complete print are unknown. So any g* only represents the orientation map of the print in one possible location. In order to predict the core point in the correct location, we list all the possible print locations corresponding to the different translations and rotations. The orientation maps of these candidate locations are denoted {g_i* | i = 1, ..., m}. Using (5), we obtain the predictive distributions p(y* | g_i*, G, y) for all the g_i*. The core point ŷ* should maximize p(y* | g_i*, G, y) with respect to g_i*. Thus the core point of the fingerprint is given by

ŷ* = k(g*_MAX, G)ᵀ [K + σ²I]⁻¹ y    (8)

where g*_MAX is the orientation map for which the maximum predictive probability of the core point is obtained, given by

g*_MAX = argmax_{g*} p(m(y*) | g*, G, y)    (9)

After the core points are determined, the fingerprints can be aligned by overlapping their core points. This is done by presenting the features in Cartesian coordinates where the origin is the core point. Note that the minutia features mentioned in the following sections have been aligned first.

3 A Generative Model for Fingerprints

In order to estimate rarity, statistical models need to be developed to represent the distribution of fingerprint features. Previous generative models for fingerprints involve different assumptions: uniform distribution of minutia locations and directions [15], and minutiae independent of each other [16, 17]. However, minutiae that are spatially close tend to have similar directions [18]. Moreover, fingerprint ridges flow smoothly with very slow orientation change. The variance of the minutia directions in different regions of the fingerprint is dependent on both their locations and location variance [19, 20].
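To make the prediction equations concrete, below is a minimal Python/NumPy sketch of Eqs. (4)-(7) (our own simplification, not the authors' code); it treats an orientation map as a flattened vector of ridge angles and predicts one coordinate of the core point. The kernel hyperparameters and noise level shown are placeholder values rather than optimized ones.

import numpy as np

def sq_exp_kernel(g1, g2, theta1=1.0, theta2=0.5):
    # Squared exponential covariance of Eq. (4); g1, g2 are
    # flattened orientation maps (1-D arrays of ridge angles).
    return theta1 * np.exp(-0.5 * theta2 * np.sum((g1 - g2) ** 2))

def gp_predict_core_point(G, y, g_star, sigma2=0.1):
    # Predictive mean and variance, Eqs. (5)-(7), for one coordinate of
    # the core point. G: list of training orientation maps; y: (N,) array
    # with that coordinate of the training core points.
    N = len(G)
    K = np.array([[sq_exp_kernel(G[i], G[j]) for j in range(N)]
                  for i in range(N)])
    k_star = np.array([sq_exp_kernel(g_star, g) for g in G])
    C = K + sigma2 * np.eye(N)
    alpha = np.linalg.solve(C, y)              # [K + sigma^2 I]^{-1} y
    v = np.linalg.solve(C, k_star)
    mean = k_star @ alpha                      # Eq. (6)
    var = sq_exp_kernel(g_star, g_star) + sigma2 - k_star @ v  # Eq. (7)
    return mean, var

The two coordinates of the core point can be predicted with two such regressors, and the localization step of Eqs. (8)-(9) then amounts to evaluating the predictive density for every candidate translation and rotation of the latent print and keeping the placement with the highest value.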
These observations on the dependency between minutiae need to be accounted for in eliciting reliable statistical models. The proposed model incorporates the distribution of minutiae and the dependency relationship between them. Minutiae are the most commonly used features for representing fingerprints. They correspond to ridge endings and ridge bifurcations. Each minutia is represented by its location and direction. The direction is determined by the ridge at the location. Automatic fingerprint matching algorithms use minutiae as the salient features [21], since they are stable and are reliably extracted. Each minutia is represented as x = (s, θ), where s = (x₁, x₂) is its location and θ its direction.

In order to capture the distribution of minutiae as well as the dependencies between them, we first propose a method to define a unique sequence for a given set of minutiae. Suppose that a fingerprint contains N minutiae. The sequence starts with the minutia x₁ whose location is closest to the core point. Each remaining minutia x_n is the one spatially closest to the centroid defined by the arithmetic mean of the location coordinates of all the previous minutiae x₁, ..., x_{n−1}. Given this sequence, the fingerprint can be represented by a minutia sequence X = (x₁, ..., x_N). The sequence is robust to the variance of the minutiae because the next minutia is decided by all the previous minutiae. Given the observation that spatially closer minutiae are more strongly related, we only model the dependence between x_n and its nearest minutia among {x₁, ..., x_{n−1}}. Although not all the dependence is taken into account, this is a good trade-off between model accuracy and computational complexity. Figure 1(a) presents an example where x₅ is determined because its distance to the centroid of {x₁, ..., x₄} is minimal. Figure 1(b) shows the minutia sequence and the minutia dependencies (arrows) for the same configuration of minutiae.

Figure 1: Minutia dependency modeling: (a) given minutiae {x₁, ..., x₄} with centroid c, the next minutia x₅ is the one closest to c, and (b) following this procedure, dependencies between seven minutiae are represented by arrows.

Figure 2: Bayesian network representing the conditional dependencies shown in Figure 1, where x_i = (s_i, θ_i). Note that there is a link between x₁ and x₂ while there is none between x₂ and x₃.

Based on the characteristics of fingerprint minutiae studied in [18, 19, 20], we know that the minutia direction is related to its location and the neighboring minutiae. The minutia location is conditionally independent of the location of the neighboring minutiae given their directions. To address the probabilistic relationships of the minutiae, Bayesian networks are used to represent the distributions of the minutia features in fingerprints. Figure 2 shows the Bayesian network for the distribution of the minutia set given in Figure 1. The nodes s_n and θ_n represent the location and direction of minutia x_n. For each conditional distribution, a directed link is added to the graph from the nodes corresponding to the variables on which the distribution is conditioned. In general, for a given fingerprint, the joint distribution over its minutia set X is given by

p(X) = p(s₁) p(θ₁ | s₁) ∏_{n=2}^{N} p(s_n) p(θ_n | s_n, s_{ψ(n)}, θ_{ψ(n)})    (10)

where s_{ψ(n)} and θ_{ψ(n)} are the location and direction of the minutia x_i which has the minimal spatial distance to the minutia x_n. So ψ(n) is given by

ψ(n) = argmin_{i ∈ [1, n−1]} ‖x_n − x_i‖    (11)

To compute the above joint probability, three probability density functions need to be estimated: the distribution of minutia location f(s), the joint distribution of minutia location and direction f(s, θ), and the conditional distribution of minutia direction given its location and the location and direction of the nearest minutia, f(θ_n | s_n, s_{ψ(n)}, θ_{ψ(n)}). It is known that minutiae tend to form clusters [18], and minutiae in different regions of the fingerprint are observed to be associated with different region-specific minutia directions. A mixture of Gaussians is a natural approach to model the minutia location, given by (12). Since minutia orientation is a periodic variable, it is modeled by the von Mises distribution, which itself is derived from the Gaussian. The minutia, represented by its location and direction, is modeled by a mixture of joint Gaussian and von Mises distributions [22], given by (13). Given its location and the nearest minutia, the minutia direction has the mixture of von Mises densities given by (14).

f(s) = ∑_{k₁=1}^{K₁} π_{k₁} N(s | μ_{k₁}, Σ_{k₁})    (12)

f(s, θ) = ∑_{k₂=1}^{K₂} π_{k₂} N(s | μ_{k₂}, Σ_{k₂}) V(θ | ν_{k₂}, κ_{k₂})    (13)

f(θ_n | s_n, s_{ψ(n)}, θ_{ψ(n)}) = ∑_{k₃=1}^{K₃} π_{k₃} V(θ_n | ν_{k₃}, κ_{k₃})    (14)

where K_i is the number of mixture components, π_{k_i} are non-negative component weights that sum to one, N(s | μ_k, Σ_k) is the bivariate Gaussian probability density function of minutiae with mean μ_k and covariance matrix Σ_k, and V(θ | ν_k, κ_k) is the von Mises probability density function of minutia orientation with mean angle ν_k and precision (inverse variance) κ_k. The Bayesian information criterion is used to estimate K_i, and the other parameters are learned by the EM algorithm.

4 Evaluation of Rarity of a Fingerprint

The general probability of random correspondence (PRC) can be modified to give the probability of matching the specific evidence within a database of n items, where the match is within some tolerance in feature space [23]. The metric of rarity is the specific nPRC, the probability that data with value x coincides with an element in a set of n samples, within specified tolerance. Since we are trying to match a specific value x, this probability depends on the probability of x. Let Y = [y₁, ..., y_n] represent a set of n random variables. A binary-valued random variable z indicates whether one sample y_i exists in a set of n random samples such that the value of y_i is the same as x within a tolerance ε. By noting the independence of x and y_i, the specific nPRC is then given by the marginal probability

p(z = 1 | x) = ∑_Y p(z = 1 | x, Y) p(Y)    (15)

where p(Y) is the joint probability of the n individuals. To compute the specific nPRC, we first define correspondence, or match, between two minutiae as follows. Let x_a = (s_a, θ_a) and x_b = (s_b, θ_b) be a pair of minutiae. The minutiae are said to correspond if, for tolerance ε = [ε_s, ε_θ],

‖s_a − s_b‖ ≤ ε_s ∧ |θ_a − θ_b| ≤ ε_θ    (16)

where ‖s_a − s_b‖ is the Euclidean distance between the minutia locations. Then, a match between two fingerprints is defined as the existence of at least m̂ pairs of matched minutiae between the two fingerprints. The tolerance ε and m̂ depend on practical applications. To deal with the largely varying quality in latent fingerprints, it is also important to consider the minutia confidence in the specific nPRC measurement. The confidence of the minutia x_n is defined as (d_{s_n}, d_{θ_n}), where d_{s_n} is the confidence of the location and d_{θ_n} is the confidence of the direction. Given the minutia x_n = (s_n, θ_n) and its confidences, the probability density functions of location s′ and direction θ′ can be modeled using Gaussian and von Mises distributions given by

c(s′ | s_n, d_{s_n}) = N(s′ | s_n, d_{s_n}⁻¹)    (17)
c(θ′ | θ_n, d_{θ_n}) = V(θ′ | θ_n, d_{θ_n})    (18)

where the variance of the location distribution (Gaussian) is the inverse of the location confidence and the concentration parameter of the direction distribution (von Mises) is the direction confidence. Let f′ be a randomly sampled fingerprint with minutia set X′ = {x′₁, ..., x′_M}. Let X̂ and X̂′ be sets of m̂ minutiae randomly picked from X and X′, where m̂ ≤ N and m̂ ≤ M. Using (10), the probability that there is a one-to-one correspondence between X̂ and X̂′ is given by

p_ε(X̂) = p_ε(s₁, θ₁) ∏_{n=2}^{m̂} p_ε(s_n) p_ε(θ_n | s_n, s_{ψ(n)}, θ_{ψ(n)})    (19)

where

p_ε(s_n, θ_n) = ∫∫_{s,θ} ∫∫_{|x−x′|≤ε} c(s′ | s_n, d_{s_n}) c(θ′ | θ_n, d_{θ_n}) f(s, θ) ds′ dθ′ ds dθ    (20)
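The sequence construction of Eqs. (10)-(11) above is purely deterministic, and the correspondence test of Eq. (16) is a direct comparison; the following Python sketch renders both (our own names and conventions; in particular, taking the angular difference modulo 2π is an implementation detail we assume).

import math

def order_minutiae(minutiae, core):
    # Sequence construction described above: start with the minutia
    # closest to the core point; each next minutia is the one closest to
    # the centroid of the minutiae already in the sequence.
    # minutiae: list of (x, y, theta) tuples; core: (x, y).
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    remaining = list(minutiae)
    seq = [min(remaining, key=lambda m: dist(m, core))]
    remaining.remove(seq[0])
    while remaining:
        cx = sum(m[0] for m in seq) / len(seq)
        cy = sum(m[1] for m in seq) / len(seq)
        nxt = min(remaining, key=lambda m: dist(m, (cx, cy)))
        seq.append(nxt)
        remaining.remove(nxt)
    return seq

def psi(seq, n):
    # Eq. (11) with 0-based indexing: index of the spatially nearest
    # minutia among seq[0..n-1].
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return min(range(n), key=lambda i: dist(seq[n], seq[i]))

def corresponds(xa, xb, eps_s, eps_theta):
    # Minutia correspondence test of Eq. (16).
    ds = math.hypot(xa[0] - xb[0], xa[1] - xb[1])
    dtheta = abs(xa[2] - xb[2]) % (2.0 * math.pi)
    dtheta = min(dtheta, 2.0 * math.pi - dtheta)
    return ds <= eps_s and dtheta <= eps_theta

The joint density of Eq. (10) is then evaluated by walking the ordered sequence and multiplying the location and conditional direction densities along it.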
Given the minutia xn = (sn , ?n ) and its confidences, the probability density functions of location s? and direction ?? can be modeled using Gaussian and von-Mises distribution given by c(s? |sn , dsn ) = N (s? |sn , d?1 sn ) (17) c(?? |?n , d?n ) = V(?? |?n , d?n ) (18) where the variance of the location distribution (Gaussian) is the inverse of the location confidence and the concentration parameter of the direction distribution (von-Mises) is the direction confidence. e and X f? Let f be a randomly sampled fingerprint which has minutia set X? = {x?1 , ..., x?M }. Let X be the sets of m ? minutiae randomly picked from X and X? ,where m ? ? N and m ? ? M . Using (10), e and X f? is given by the probability that there is a one-to-one correspondence between X where e = p? (s1 , ?1 ) p? (X) p? (sn , ?n ) = Z Z ZZ m ? Y p? (sn )p? (?n |sn , s?(n) , ??(n) ) (19) n=2 c(s? |sn , dsn )c(?? |?n , d?n )f (s, ?)ds? d?? dsd? s? ? ? |x?x? |?? 5 (20) p? (sn ) = Z p? (?n |sn , s?(n) , ??(n) ) = Z Z c(s? |sn , dsn )f (s)ds? ds (21) c(?? |?n , d?n )f (?|sn , s?(n) , ??(n) )d?? d? (22) s? |s?s? |??s Z ? ? |??? ? |??? Finally, the specific nPRCs can be computed by p? (X, m, ? n) = 1 ? (1 ? p? (X, m)) ? n?1 (23) where X represents the minutia set of given fingerprint, and p? (X, m) ? is the probability that m ? pairs of minutiae are matched between the given fingerprint and a randomly chosen fingerprint from n fingerprints. N (m  ? X ?) X m ? e i) ? p? (X (24) p? (X, m) ? = p(m ) m ? ? i=1 m ?M where M contains all possible numbers of minutiae in one fingerprint among n fingerprints, p(m? ) is e i = (xi1 , xi2 , ..., xim the probability of a random fingerprint having m? minutiae, minutia set X ? ) is e i ) is the joint probability of minutia set X e i given by (19). Gibbs sampling the subset of X and p? (X is used to approximate the integral involved in the probability calculation. 5 Model Validation In order to validate the proposed methods, core point prediction was first tested. Goodness-of-fit tests were performed on the proposed generative models. Two databases were used, one is NIST4, and the other is NIST27. NIST4 contains 8-bit gray scale images of randomly selected fingerprints. Each print has 512 ? 512 pixels. The entire database contains fingerprints taken from 2000 different fingers with 2 impression of the same finger. NIST27 contains latent fingerprints from crime scenes and their matching rolled fingerprint mates. There are 258 latent cases separated into three quality categories of good, bad, and ugly. 5.1 Core Point Prediction The Gaussian process models for core point prediction are trained on NIST4 and tested on NIST27. The orientation maps are extracted by conventional gradient-based approach. The fingerprint images are first divided into equal-sized blocks of N ? N pixels, where N is the average width of a pair of ridge and valley. The value of N is 8 in NIST4 and varies in NIST27. The gradient vectors are calculated by taking the partial derivatives of image intensity at each pixel in Cartesian coordinates. The ridge orientation is perpendicular to the dominant gradient angle in the local block. The training set consists of the orientation maps of the fingerprints and the corresponding core points which are marked manually. The core point prediction is applied on three groups of latent prints in different quality. Figure 3 shows the results of core point prediction and subsequent latent print localization given two latent fingerprints from NIST27. 
Table 1 shows a comparison of the prediction precision of the Gaussian Processes (GP) based approach and the widely used Poincare Index (PI) [8]. The test latent prints are extracted and enhanced manually. The true core points of the latent prints are picked from the matching 10-prints. Correct prediction is determined by comparing the location and direction distances between predicted and true core points with the threshold parameters set at T_s = 16 pixels and T_θ = π/6. The good quality set contains 88 images that mostly contain the core points. Both the bad and ugly quality sets contain 85 images that have small size and usually do not include core points. Among the precisions on good quality latent prints, the two approaches are close. Precisions on bad and ugly quality prints show a distinct difference between the two methods and indicate that the GP based method provides core point prediction even when the core points cannot be seen in the latent prints. The GP based method also results in higher overall prediction precision.

Figure 3: Latent print localization ((a) case "g90", (b) case "g69"): Left side images are the latent fingerprints (rectangles) collected from crime scenes. Right side images contain the predicted core points (crosses) and true core points (rounds) with the orientation maps of the latent prints.

Table 1: Comparison of prediction precisions of PI and GP based approaches.

            Poincare Index   Gaussian Processes
Good        90.6%            93.1%
Bad         68.2%            87.1%
Ugly        46.6%            72.7%
Overall     68.6%            84.5%

5.2 Goodness-of-fit

The validation of the proposed generative model is by means of a goodness-of-fit test, which determines how well a sample of data agrees with the proposed model distribution. The chi-square statistical hypothesis test was applied [24]. Three different tests were conducted for: (i) the distribution of minutia location (12), (ii) the joint distribution of minutia location and orientation (13), and (iii) the distributions of minutia dependency (14). For minutia location, we partitioned the minutia location space into 16 non-overlapping blocks. For minutia location and orientation, we partitioned the feature space into 16 × 4 non-overlapping blocks. For minutia dependency, the orientation space is divided into 9 non-overlapping blocks. The blocks are combined with adjacent blocks until both the observed and expected numbers of minutiae in the block are greater than or equal to 5. The test statistic used here is a chi-square random variable χ² defined by the following equation:

χ² = ∑_i (O_i − E_i)² / E_i    (25)

where O_i is the observed minutia count for the i-th block, and E_i is the expected minutia count for the i-th block. The p-value, the probability of observing a sample statistic as extreme as the test statistic, associated with each test statistic χ² is then calculated based on the chi-square distribution and compared to the significance level. For the NIST4 dataset, we chose a significance level equal to 0.01. 4000 fingerprints are used to train the generative models proposed in Section 3. To test the models for minutia location, and minutia location and orientation, the numbers of fingerprints with p-values above (corresponding to accepting the model) and below (corresponding to rejecting the model) the significance level are computed. Of the 4000 fingerprints, 3387 are accepted and 613 are rejected for the minutia location model, and 3216 are accepted and 784 are rejected for the minutia location and orientation model.
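A minimal sketch of the test statistic in Eq. (25), including the adjacent-block merging described above (our own rendering; the degrees of freedom and p-value step are only indicated in the trailing comment):

def chi_square_statistic(observed, expected, min_count=5):
    # Chi-square statistic of Eq. (25). Adjacent blocks are merged until
    # both observed and expected counts reach min_count, as in the text.
    merged, o_acc, e_acc = [], 0.0, 0.0
    for o, e in zip(observed, expected):
        o_acc += o
        e_acc += e
        if o_acc >= min_count and e_acc >= min_count:
            merged.append((o_acc, e_acc))
            o_acc = e_acc = 0.0
    if merged and (o_acc or e_acc):  # fold any remainder into the last bin
        o_last, e_last = merged[-1]
        merged[-1] = (o_last + o_acc, e_last + e_acc)
    stat = sum((o - e) ** 2 / e for o, e in merged)
    return stat, len(merged)

# The p-value is the chi-square survival function of the statistic at the
# appropriate degrees of freedom, e.g. scipy.stats.chi2.sf(stat, dof).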
To test the model for minutia dependency, we first collect all the linked minutia pairs in the minutia sequences produced from the 4000 fingerprints. Then these minutia pairs are separated by the binned locations of both minutiae (32 × 32) and the orientation of the leading minutia (4). Finally, the minutia dependency models can be tested on the corresponding minutia pair sets. Of the 4096 data sets, 3558 are accepted and 538 are rejected. The results imply that the proposed generative models offer a reasonable and accurate fit to fingerprints.

Table 2: Results from the chi-square tests for testing the goodness of fit of the three generative models.

Generative model    f(s)    f(s, θ)    f(θ_n | s_n, s_ψ(n), θ_ψ(n))
Dataset size        4000    4000       4096
Model accepted      3387    3216       3558
Model rejected      613     784        538
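The expected counts E_i used in these tests come from the fitted model densities. Under stated assumptions (a fitted Gaussian mixture for f(s) with diagonal covariances, rectangular blocks), they can be estimated by sampling from the mixture, as in this sketch; the names and parameters are ours, not the authors'.

import random

def sample_location(weights, means, vars_diag):
    # Draw one minutia location from the Gaussian mixture of Eq. (12);
    # for simplicity we assume diagonal covariances here.
    r, acc = random.random(), 0.0
    for w, mu, var in zip(weights, means, vars_diag):
        acc += w
        if r <= acc:
            return (random.gauss(mu[0], var[0] ** 0.5),
                    random.gauss(mu[1], var[1] ** 0.5))
    return (random.gauss(means[-1][0], vars_diag[-1][0] ** 0.5),
            random.gauss(means[-1][1], vars_diag[-1][1] ** 0.5))

def expected_block_counts(weights, means, vars_diag, blocks, n_minutiae,
                          draws=100000):
    # Monte Carlo estimate of the expected count E_i per rectangular block
    # (x0, y0, x1, y1), to be compared with observed counts via Eq. (25).
    hits = [0] * len(blocks)
    for _ in range(draws):
        x, y = sample_location(weights, means, vars_diag)
        for i, (x0, y0, x1, y1) in enumerate(blocks):
            if x0 <= x < x1 and y0 <= y < y1:
                hits[i] += 1
                break
    return [n_minutiae * h / draws for h in hits]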
Acknowledgments This work was supported by the United States Department of Justice award NIJ: 2009-DN-BXK208. The opinions expressed are those of the authors and not of the DOJ. 8 References [1] R. Chakraborty. Statistical interpretation of DNA typing data. American Journal of Human Genetics, 49(4):895?897, 1991. [2] United States Court of Appeals for the Third Circuit: USA v. Byron Mitchell, 2003. No. 02-2859. [3] D.V. Lindley. A problem in forensic science. Biometrika, 64(2):207?213, 1977. [4] C. Neumann, C. Champod, R. Puch-Solis, N. Egli, A. Anthonioz, and A. Bromage-Griffiths. Computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae. Journal of Forensic Sciences, 51:1255?1266, 2007. [5] S.N. Srihari and H. Srinivasan. Comparison of ROC and Likelihood Decision Methods in Automatic Fingerprint Verification. International J. Pattern Recognition and Artificial Intelligence, 22(1):535?553, 2008. [6] A.K. Jain and D. Maltoni. Handbook of Fingerprint Recognition. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003. [7] M. Kawagoe and A. Tojo. Fingerprint pattern classification. Pattern Recogn., 17(3):295?303, 1984. [8] A.M. Bazen and S.H. Gerez. Systematic methods for the computation of the directional fields and singular points of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell., 24(7):905?919, 2002. [9] A.K. Jain, S. Prabhakar, and L. Hong. A multichannel approach to fingerprint classification. IEEE Trans. Pattern Anal. Mach. Intell., 21(4):348?359, 1999. [10] A.K. Jain, S. Prabhakar, L. Hong, and S. Pankanti. Filterbank-based fingerprint matching. IEEE Transactions on Image Processing, 9:846?859, 2000. [11] D. Phillips. A fingerprint orientation model based on 2d fourier expansion (fomfe) and its application to singular-point detection and fingerprint indexing. IEEE Trans. Pattern Anal. Mach. Intell., 29(4):573?585, 2007. [12] X. Wang, J. Li, and Y. Niu. Definition and extraction of stable points from fingerprint images. Pattern Recogn., 40(6):1804?1815, 2007. [13] M. Liu, X. Jiang, and A.C. Kot. Fingerprint reference-point detection. EURASIP J. Appl. Signal Process., 2005:498?509, 2005. [14] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. the MIT Press, 2006. [15] S. Pankanti, S. Prabhakar, and A.K. Jain. On the individuality of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell., 24(8):1010?1025, 2002. [16] Y. Zhu, S.C. Dass, and A.K. Jain. Statistical models for assessing the individuality of fingerprints. IEEE Transactions on Information Forensics and Security, 2(3-1):391?401, 2007. [17] Y. Chen and A.K. Jain. Beyond minutiae: A fingerprint individuality model with pattern, ridge and pore features. In ICB ?09 Proceedings, pages 523?533, Berlin, Heidelberg, 2009. Springer-Verlag. [18] S.C. Scolve. The occurence of fingerprint characteristics as a two dimensional process. Journal of the American Statistical Association, 367(74):588?595, 1979. [19] D.A. Stoney. Distribution of epidermal ridge minutiae. American Journal of Physical Anthropology, 77:367?376, 1988. [20] J. Chen and Y. Moon. A statistical study on the fingerprint minutiae distribution. In ICASSP 2006 Proceedings., volume 2, pages II?II, 2006. [21] C. Watson, M. Garris, E. Tabassi, C. Wilson, R. McCabe, and S. Janet. User?s Guide to NIST Fingerprint Image Software 2 (NFIS2). NIST, 2004. [22] C. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006. [23] C. Su and S.N. Srihari. 
Probability of random correspondence for fingerprints. In IWCF ?09 Proceedings, pages 55?66, Berlin, Heidelberg, 2009. Springer-Verlag. [24] R.B. D?Agostino and M.A. Stephens. Goodness-of-fit Techniques. CRC Press, 1986. 9
3945 |@word chakraborty:1 justice:1 covariance:6 configuration:2 contains:10 liu:1 united:2 offering:1 existing:1 comparing:1 si:1 subsequent:1 generative:14 selected:2 intelligence:1 item:1 inspection:1 ith:2 core:46 lr:3 provides:2 node:2 location:40 dn:1 dsn:5 consists:1 manner:1 expected:2 multi:2 chi:4 minutia:118 underlying:1 moreover:2 matched:2 estimating:1 circuit:1 null:2 mccabe:1 argmin:1 developed:2 finding:2 nj:1 biometrika:1 k2:7 filterbank:1 yn:2 engineering:1 local:1 mach:4 jiang:1 niu:1 birthday:2 chose:1 anthropology:1 studied:1 collect:1 appl:1 co:1 perpendicular:1 decided:1 unique:2 directed:1 practical:1 testing:1 acknowledgment:1 practice:1 block:10 x3:1 procedure:2 area:3 poincare:3 reject:1 matching:6 confidence:10 refers:1 regular:1 griffith:1 prc:1 close:2 valley:1 janet:1 conventional:1 map:12 demonstrated:1 center:2 maximizing:1 williams:1 resolution:1 population:1 coordinate:3 enhanced:2 suppose:3 gm:3 user:1 hypothesis:3 origin:1 element:2 recognition:3 continues:1 database:6 observed:3 solved:1 capture:1 wang:1 region:3 trade:1 decrease:1 mentioned:1 complexity:1 dispense:1 trained:1 depend:1 predictive:5 localization:4 icassp:1 joint:11 represented:6 finger:3 recogn:2 train:1 separated:2 distinct:1 jain:6 artificial:1 whose:6 widely:1 solve:1 valued:1 cov:2 gi:7 g1:1 statistic:4 gp:5 highlighted:1 itself:2 advantage:1 sequence:7 propose:1 product:1 neighboring:2 aligned:5 loop:2 flexibility:1 description:1 secaucus:1 validate:1 cluster:1 xim:1 assessing:3 neumann:1 prabhakar:3 nearest:3 sa:2 strong:1 predicted:4 involves:1 come:5 indicate:1 direction:24 correct:2 human:1 opinion:1 crc:1 require:1 singularity:2 exp:1 k3:6 predict:7 agrees:1 mit:1 gaussian:27 modified:1 varying:4 wilson:1 validated:3 ax:3 derived:1 sequencing:1 likelihood:6 indicates:1 centroid:3 detect:1 inference:1 dependent:2 sb:3 entire:1 accept:1 pixel:5 overall:2 among:4 orientation:23 classification:2 proposes:1 spatial:4 icb:1 bifurcation:1 marginal:1 field:3 equal:3 never:1 having:1 extraction:1 sampling:1 zz:1 x4:2 represents:3 manually:3 randomly:6 kawagoe:1 intell:4 individual:2 replaced:1 argmax:1 attempt:1 detection:3 fingertip:1 evaluation:7 rolled:1 mixture:4 extreme:1 nprc:9 xb:1 accurate:2 tuple:1 closer:1 partial:2 necessary:1 integral:1 capable:1 biometrics:1 euclidean:1 nij:1 minimal:2 instance:1 modeling:1 gn:1 ksa:1 goodness:8 subset:1 uniform:1 conducted:1 characterize:1 dependency:15 varies:1 periodic:1 combined:1 person:1 density:5 explores:1 amherst:1 international:1 systematic:3 probabilistic:2 off:1 xi1:1 pore:1 squared:2 von:6 containing:1 american:3 derivative:1 leading:1 li:1 account:3 north:1 inc:1 depends:1 piece:1 sine:1 performed:1 picked:3 observing:1 linked:1 portion:1 start:1 lindley:1 square:4 oi:2 accuracy:2 moon:1 variance:6 characteristic:2 largely:2 spaced:1 correspond:2 directional:1 bayesian:7 identification:1 produced:1 none:1 confirmed:1 definition:2 against:1 involved:1 obvious:2 associated:3 mi:6 sampled:1 dataset:3 popular:1 mitchell:1 forensics:3 higher:1 wherein:1 specify:1 formulation:1 done:2 evaluated:3 strongly:1 though:1 furthermore:1 xa:1 rejected:4 until:1 d:3 ei:3 su:2 overlapping:5 quality:8 reveal:1 gray:2 usa:2 contain:6 true:3 assigned:1 kxn:1 spatially:3 deal:1 round:1 x5:2 adjacent:1 width:1 visualizing:1 coincides:1 criterion:1 hong:2 trying:1 presenting:1 impression:1 complete:3 ridge:13 image:12 common:1 rotation:1 physical:1 conditioning:1 volume:1 association:1 interpretation:1 measurement:2 gibbs:1 phillips:1 automatic:2 
grid:1 fingerprint:99 stable:2 similarity:4 gj:1 align:2 dominant:1 curvature:1 closest:3 recent:1 verlag:3 binary:1 watson:1 life:1 yi:4 seen:2 greater:1 determine:3 maximize:1 signal:1 arithmetic:1 ii:3 full:1 stephen:1 match:7 characterized:2 calculation:2 cross:1 offer:2 divided:2 award:1 impact:1 prediction:13 das:1 regression:2 vision:1 metric:2 kernel:1 represent:4 singular:2 source:2 tend:2 byron:1 flow:3 incorporates:1 presence:1 noting:1 iii:1 independence:1 fit:9 topology:1 court:2 whether:1 effort:1 york:2 involve:1 category:1 dna:3 multichannel:1 estimated:2 srinivasan:1 group:1 salient:1 procedural:1 threshold:1 preprocessed:1 registration:3 rectangle:1 graph:1 sum:1 angle:3 parameterized:1 uncertainty:1 inverse:2 reasonable:1 decision:1 bit:1 ki:3 individuality:3 correspondence:10 binned:1 scene:6 x2:3 software:1 nearby:1 fourier:2 argument:1 department:2 describes:2 em:1 partitioned:2 s1:3 indexing:1 taken:3 equation:1 pankanti:2 turn:1 count:2 xi2:1 needed:1 know:1 denotes:2 remaining:1 include:1 k1:5 especially:1 eliciting:1 skin:1 print:37 realized:1 added:1 parametric:1 concentration:1 dependence:2 said:1 gradient:4 distance:4 link:2 berlin:2 dsd:1 seven:1 collected:4 assuming:1 index:3 relationship:4 modeled:3 ratio:4 difficult:2 mostly:1 statement:1 stated:1 negative:1 anal:4 reliably:1 unknown:1 perform:2 observation:4 mate:1 predication:1 nist:3 buffalo:2 t:1 y1:2 intensity:2 ordinate:1 introduced:1 pair:9 specified:1 optimized:1 crime:6 security:1 learned:1 epidermal:1 trans:4 address:1 able:1 beyond:1 garris:1 usually:5 pattern:11 below:1 kot:1 challenge:1 reliable:3 critical:1 natural:1 typing:1 forensic:3 zhu:1 representing:3 improve:1 imply:1 occurence:1 sn:18 prior:2 determining:1 relative:2 proven:1 validation:2 degree:2 verification:1 pi:3 share:1 translation:1 genetics:1 summary:1 accounted:1 supported:1 last:1 rasmussen:1 side:2 ugly:4 guide:1 taking:1 tolerance:7 calculated:4 xn:9 gram:1 ending:1 author:1 made:1 commonly:2 collection:1 simplified:1 transaction:2 approximate:1 handbook:1 xi:3 latent:38 table:5 robust:2 heidelberg:2 expansion:2 domain:1 significance:5 arrow:2 noise:4 hyperparameters:1 x1:8 rarity:19 roc:1 ny:1 slow:1 precision:6 exponential:3 third:1 bad:4 specific:18 bishop:1 list:1 appeal:1 evidence:17 concern:1 intrinsic:1 bivariate:1 exists:1 conditioned:1 cartesian:2 chen:2 smoothly:1 photograph:1 srihari:4 sargur:1 visual:1 expressed:2 chang:1 springer:4 corresponds:1 chance:1 determines:1 extracted:4 conditional:4 identity:2 sized:1 marked:1 change:1 eurasip:1 determined:5 called:1 accepted:4 tested:3
3,252
3,946
Synergies in learning words and their referents

Katherine Demuth
Department of Linguistics
Macquarie University
Sydney, NSW 2109
[email protected]

Mark Johnson
Department of Computing
Macquarie University
Sydney, NSW 2109
[email protected]

Michael Frank
Department of Psychology
Stanford University
Palo Alto, CA 94305
[email protected]

Bevan K. Jones
School of Informatics
University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB, UK
[email protected]

Abstract

This paper presents Bayesian non-parametric models that simultaneously learn to segment words from phoneme strings and learn the referents of some of those words, and shows that there is a synergistic interaction in the acquisition of these two kinds of linguistic information. The models themselves are novel kinds of Adaptor Grammars that are an extension of an embedding of topic models into PCFGs. These models simultaneously segment phoneme sequences into words and learn the relationship between non-linguistic objects and the words that refer to them. We show (i) that modelling inter-word dependencies not only improves the accuracy of the word segmentation but also of the word-object relationships, and (ii) that a model that simultaneously learns word-object relationships and word segmentation segments more accurately than one that just learns word segmentation on its own. We argue that these results support an interactive view of language acquisition that can take advantage of synergies such as these.

1 Introduction

Conventional views of language acquisition often assume that human language learners initially use a single source of information to acquire one component of language, which they then use to leverage the acquisition of other linguistic components. For example, Kuhl [1] presents a standard "bootstrapping" view of early language acquisition in which successively more difficult tasks are addressed by learners, beginning with phoneme inventory and progressing to word segmentation and word learning. This view is also taken implicitly by, e.g., Graf Estes et al [2], who showed that infants were more successful in mapping novel objects to novel words after those words had been successfully segmented from the speech stream.

We contrast this view with an "interactive" view of language acquisition in which learners do not move from problem to problem, but instead attempt to learn all of the components of language at once. Computationally speaking, an interactive account views language acquisition as a joint inference problem for all components of language simultaneously, rather than a discrete sequence of inference problems for individual language components. (We are thus using "interactive" to refer to the way that language acquisition is formulated as an inference problem, rather than a specific mechanism or architecture as in [3]).

[Figure 1 appears here: a photograph of a toy pig and dog above the input provided to the models for the utterance Is that the pig?, namely the context prefix PIG|DOG followed by the phoneme string i z | D & t | D e | p I g, with the filled separators marking the correct word boundaries and the words mapped to their referents.]

Figure 1: The photograph indicates non-linguistic context containing the (toy) pig and dog for the utterance Is that the pig?. Below that, we show the input provided to our models representing this utterance [8]. The objects in the non-linguistic context are indicated by the prefix "PIG|DOG", which is followed by the unsegmented phonemicised input. The possible word segmentation points are indicated by separators between the phonemes. The correct analysis of this input (which is not provided to the model) is depicted by blue annotations to this input. The correct word segmentation is indicated by the filled blue word separators, and the mapping between words and non-linguistic objects is indicated by the underbrace subscript.

One advantage of an interactive approach is that it can take advantage of synergies in acquisition, i.e., situations where partial knowledge of several different aspects of language mutually aid their acquisition, i.e., where improvements in the acquisition of component A also improve the acquisition of component B, and improvements in the acquisition of component B also improve the acquisition of component A. An interactive approach can take advantage of both of these, while a staged approach to acquisition where A is learned before B forgoes the ability to use knowledge of B to help learn A.

In this paper we focus on the acquisition of two of the simpler aspects of language: (i) segmenting sentences into words (thereby identifying their pronunciations), and (ii) the relationship between words and the objects they refer to. We present a sequence of models for inferring (i) and (ii), and demonstrate synergistic interactions in learning. Specifically, we show that (i) modifying the model in a way that improves its word segmentation ability also improves its ability to identify the intended referents of utterances, and that (ii) incorporating a more sophisticated model of the relationship between words and the objects they refer to also improves the model's ability to segment words.

The acquisition of word pronunciations is viewed as a segmentation problem as follows. Following Elman [4] and Brent [5, 6], a corpus of child-directed speech is "phonemicised" by looking each word up in a pronouncing dictionary and concatenating those pronunciations. For example, the mother's utterance Is that the pig is mapped to the broad phonemic representation Iz D&t D6 pIg (in an ASCII-based broad phonemic encoding), which are then concatenated to form IzD&tD6pIg. The word segmentation task is to segment a corpus of such unsegmented utterance representations into words, thus identifying the pronunciations of the words in the corpus.

We study the acquisition of the relationship between words and the objects they refer to using the framework proposed by Frank et al [7]. Here each utterance in the corpus is labelled with the contextually-relevant objects that the speaker might be referring to. These are determined by inspecting videos of the utterance context. For example, in the context of Figure 1, the utterance would be labelled with the two contextually-relevant objects PIG and DOG. The learner's task is to identify which words, if any, in the utterance refer to each of these objects.

Jones et al [8] combined the word segmentation and word reference tasks into a single inference task, where the goal is to simultaneously segment the utterance into words, and to map a subset of the words of each utterance to the utterance's contextually-relevant objects. This is the task that we investigate in this paper.

The rest of this paper is structured as follows. The next section summarises previous work on word segmentation and learning the relationship between words and their referents. Section 3 introduces Adaptor Grammars, explains how they can be used for word segmentation and topic modelling, and presents the Adaptor Grammars that will be used in this paper. Section 4 presents experimental results showing synergistic interactions between word segmentation and learning the relationship between words and the objects they refer to, while section 5 summarises and concludes the paper.
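To make this input representation concrete, the following sketch phonemicises an utterance with a toy pronouncing dictionary and prepends the context identifier. The dictionary entries and function names are our own illustrative assumptions, not resources from the paper.

```python
# Toy pronouncing dictionary in the paper's ASCII-based broad phonemic
# encoding; the entries and helper names are illustrative assumptions.
PRONUNCIATIONS = {"is": "Iz", "that": "D&t", "the": "D6", "pig": "pIg"}

def make_input(context_objects, words):
    """Build one training item: context prefix + unsegmented phoneme string."""
    phonemes = "".join(PRONUNCIATIONS[w] for w in words)
    return "|".join(sorted(context_objects)) + " " + phonemes

print(make_input({"PIG", "DOG"}, ["is", "that", "the", "pig"]))
# -> "DOG|PIG IzD&tD6pIg"
```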
2 Previous work

Word segmentation has been studied from a wide variety of computational perspectives. Elman [4] and Brent [5, 6] introduced the basic word segmentation paradigm investigated here. Goldwater et al [9] introduced a non-parametric model of word segmentation based on Hierarchical Dirichlet Processes (HDPs) [10], and demonstrated that a bigram model, which captures dependencies between adjacent words, produces significantly more accurate segmentations than a unigram model, which assumes each word in a sentence is generated independently. Because the unigram model makes the "bag of words" assumption it has no way to capture inter-word dependencies. Because there are strong inter-word dependencies in real language, e.g., a noun like ball is very likely to be preceded by determiners the or a, a unigram model tends to undersegment, e.g., misanalyse the ball as a single word. The bigram model, because it explicitly models and hence can "explain away" the dependency between the and ball, is more likely to correctly segment this example.

Johnson et al [11] introduced a generalisation of Probabilistic Context-Free Grammars (PCFGs) called Adaptor Grammars (AGs) as a framework for specifying HDPs for linguistic applications (because this paper relies heavily on AGs we describe them in more detail in section 3 below). Johnson [12] investigated AGs for word segmentation that capture a range of different kinds of generalisations. The unigram AG replicates the unigram segmentation model of Goldwater et al, and suffers from the same undersegmentation problems. It turns out that it is not possible to express Goldwater et al's bigram model as an AG, but a collocation AG, which is an HDP that generates a sentence as a sequence of collocations where each collocation is a sequence of words, captures similar inter-word dependencies and produces very similar word segmentation results.

The acquisition of the mapping between words and the objects they refer to was studied by Frank et al [7]. They used a modified version of the LDA topic model [13] where the "topics" are contextually-relevant objects that words in the utterance can refer to, so the mapping from "topics" to words effectively specifies which words refer to these contextually-salient objects. Jones et al [8] integrated the Frank et al "topic" model of the word-object relationship with the unigram model of Goldwater et al to obtain a joint model that both performs word segmentation and also learns which words refer to which contextually-salient objects.

Johnson [14] explains how LDA topic models can be expressed as PCFGs. We use this reduction to express the Frank et al models [7] of the word-object relationship as AGs which also incorporate Johnson's [12] models of word segmentation. The resulting AGs can express a wide range of joint HDP models of word segmentation and the word-object relationship, including the model proposed by Jones et al [8], as well as several generalisations.

3 Adaptor grammars for segmentation and word-object acquisition

This section provides an informal introduction to Adaptor Grammars (AGs), explains how they can be used to express word segmentation and topic models, and presents the AGs for joint segmentation and acquisition of the word-object relationship. For more detail on the formal properties of AGs see [11], and for information on AG inference procedures see [15, 16].
3.1 Probabilistic Context-Free Grammars

Adaptor Grammars (AGs) are an extension of Probabilistic Context-Free Grammars (PCFGs), which we describe first. A Context-Free Grammar (CFG) G = (N, W, R, S) consists of disjoint finite sets of nonterminal symbols N and terminal symbols W, a finite set of rules R of the form A → β where A ∈ N and β ∈ (N ∪ W)*, and a start symbol S ∈ N. (We assume there are no "ε-rules" in R, i.e., we require that |β| ≥ 1 for each A → β ∈ R).

A CFG G generates a set of finite, labelled, ordered trees T_X for each X ∈ N ∪ W. If X ∈ W (i.e., X is a terminal) then T_X = {X}, i.e., the singleton set consisting of a one-node tree labelled X. If X ∈ N then T_X consists of all trees t whose root node is labelled X, each leaf node's label is in W, each non-leaf node's label is in N, and for each non-leaf node x in t with label A ∈ N there is a rule A → β ∈ R such that the sequence of labels of x's children is β. The set of strings generated by G is the set of yields of T_S, where the yield of a tree is the sequence of its leaf nodes' labels.

A Probabilistic Context-Free Grammar (PCFG) is a quintuple (N, W, R, S, θ) where (N, W, R, S) is a CFG and θ is a vector of non-negative reals indexed by R that satisfy Σ_{A→β ∈ R_A} θ_{A→β} = 1 for each A ∈ N, where R_A = {A → β : A → β ∈ R} is the set of rules expanding A. Informally, θ_{A→β} is the probability of a node labelled A expanding to a sequence of nodes labelled β, and the probability of a tree is the product of the probabilities of the rules used to construct each non-leaf node in it.

More precisely, for each X ∈ N ∪ W a PCFG associates distributions G_X over the trees T_X as follows. If X ∈ W (i.e., if X is a terminal) then G_X is the distribution that puts probability 1 on the single-node tree labelled X. If X ∈ N (i.e., if X is a nonterminal) then:

    G_X = Σ_{X → B_1 ... B_n ∈ R_X} θ_{X → B_1 ... B_n} TD_X(G_{B_1}, ..., G_{B_n})    (1)

where TD_A(G_1, ..., G_n) is the distribution over trees T_A with root label A and subtrees t_1, ..., t_n in which each subtree t_i is generated independently from G_i:

    TD_A(G_1, ..., G_n)(t) = Π_{i=1}^{n} G_i(t_i)

The PCFG generates the distribution G_S over the trees T_S, where S is the start symbol; the distribution over the strings it generates is obtained by marginalising over the trees.

In a Bayesian PCFG one puts Dirichlet priors Dir(α) on the rule probability vector θ, such that there is one Dirichlet parameter α_{A→β} for each rule A → β ∈ R. In the "unsupervised" inference problem for a PCFG one is given a CFG, parameters α for Dirichlet priors over the rule probabilities, and a corpus of strings. The task is to infer the corresponding posterior distribution over rule probabilities θ. Recently Bayesian inference algorithms for PCFGs have been described: Kurihara et al [17] describe a Variational Bayes algorithm for inferring PCFGs using a mean-field approximation, while Johnson et al [18] describe a Markov chain Monte Carlo algorithm based on Gibbs sampling.

3.2 Modelling word-object reference using PCFGs

This section presents a novel encoding of a Frank et al [7] model for identifying word-object relationships as a PCFG. It is an adaptation of the reduction of LDA topic models to PCFGs given by Johnson [14]. That paper showed how to construct a PCFG that generates the same distribution over a collection of documents as an LDA model, and where Bayesian inference for the PCFG's rule probabilities yields the corresponding distributions as Bayesian inference of the corresponding LDA models.
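To make the generative process defined by Equation 1 concrete, the sketch below draws a tree from a PCFG by ancestral sampling and reads off its yield. The toy grammar and all names are invented for illustration; this is not code from the paper.

```python
import random

def sample_tree(rules, symbol):
    """Ancestral sampling from a PCFG: expand `symbol` by drawing a rule
    according to its probability, then recurse on the children."""
    if symbol not in rules:                     # terminal: a one-node tree
        return symbol
    rhss = [rhs for rhs, _ in rules[symbol]]
    probs = [p for _, p in rules[symbol]]
    rhs = random.choices(rhss, weights=probs)[0]
    return (symbol,) + tuple(sample_tree(rules, child) for child in rhs)

def tree_yield(tree):
    """The string a tree generates: the sequence of its leaf labels."""
    if isinstance(tree, str):
        return [tree]
    return [leaf for child in tree[1:] for leaf in tree_yield(child)]

# Toy grammar, not from the paper: Sentence -> Word Sentence | Word.
rules = {
    "Sentence": [(("Word", "Sentence"), 0.4), (("Word",), 0.6)],
    "Word": [(("pig",), 0.5), (("dog",), 0.5)],
}
print(" ".join(tree_yield(sample_tree(rules, "Sentence"))))
```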
Because the Frank et al [7] model of the word-object relationship is very similar to an LDA topic model, we can use the same techniques to design Bayesian PCFGs that infer word-object relationships. The models we investigate in this paper assume that the words in a single sentence refer to at most one non-linguistic object (although it would be easy to relax this restriction).

In this subsection we assume that the vocabulary V (i.e., a set of words) is given, as is the set O of objects that they can refer to. Let O′ = O ∪ {∅}, where ∅ is a distinguished "null object" not in O, and let the nonterminals be N = {S} ∪ {A_o, B_o : o ∈ O′}, where A_o and B_o are nonterminals indexed by the objects o ∈ O′. Informally, a nonterminal B_o expanding to word w ∈ V indicates that w refers to object o, while a B_∅ expanding to w indicates that w is non-referential.

The set of objects in the non-linguistic context of an utterance is indicated by prefixing the utterance with a context identifier associated with those objects, such as "PIG|DOG" in Figure 1. A context identifier c is a subset of O′ that contains ∅ (i.e., the null object is always in context). We assume we are given a (non-empty) set C of context identifiers disjoint from V. Then the terminals of the PCFG are W = V ∪ C, and the rules R of the PCFG are all instances of the following schemata:

    S → A_o           o ∈ O′
    A_o → c           c ∈ C, o ∈ c
    A_o → A_o B_o     o ∈ O′
    A_o → A_o B_∅     o ∈ O′
    B_o → w           o ∈ O′, w ∈ V        (2)

We call this the reference PCFG because it generates word-object reference pairs. An example of a tree generated by this grammar is shown in Figure 2.

[Figure 2 appears here: a parse tree whose left spine of A_pig nodes dominates the context identifier PIG|DOG and the words is, that, the (each under B_∅) and pig (under B_pig).]

Figure 2: A tree generated by the reference PCFG encoding a Frank et al [7] model of the word-object relationship. The yield of this tree corresponds to the sentence Is that the pig, and the context identifier is "PIG|DOG".

This grammar generates sentences consisting of a context identifier followed by a sequence of words, e.g., PIG|DOG is that the pig. Informally, the rule expanding S picks an object o that the words in the sentence can refer to (if o = ∅ then all words in the sentence are non-referential). The first rule expanding A_o ensures that o is a member of that sentence's non-linguistic context, the second rule generates a B_o that will ultimately generate a word w (which we take to indicate that w refers to o), while the third rule generates a word associated with the null object ∅.

A slightly more complicated PCFG, which we call the reference1 grammar, can enforce the requirement that there is at most one referential word in each sentence. This constraint often holds in the simple sentences that appear in infant-directed speech (e.g., in Is that the pig?, the pig is only mentioned once).

    S → S B_∅
    S → c             c ∈ C
    S → A_o B_o       o ∈ O
    A_o → c           c ∈ C, o ∈ c
    A_o → A_o B_∅     o ∈ O
    B_o → w           o ∈ O′, w ∈ V        (3)

In this grammar the nonterminal labels function as states that record not just which object a referential word refers to, but also whether that referential word has been generated or not. Viewed top-down, the switch from S to A_o indicates that a word from B_o has just been generated (which we interpret as referring to object o). This object o is passed down the A_o chain generating words from B_∅; the final expansion of A_o → c checks that o is compatible with the context indicator c.

3.3 Adaptor grammars

This subsection briefly reviews adaptor grammars; for more detail see [11].
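As a concrete illustration, the sketch below enumerates the rule schemata in (2) for a given object set O, context identifier set C and vocabulary V. The string encoding of nonterminals (e.g. "A_PIG") and the use of "NULL" for the null object ∅ are our own conventions, not the paper's.

```python
def reference_pcfg_rules(objects, contexts, vocab):
    """Instantiate the reference PCFG's rule schemata (2); rules are
    (lhs, rhs_tuple) pairs, deduplicated because the B_o and B_NULL
    schemata coincide when o is the null object."""
    o_prime = sorted(objects) + ["NULL"]
    rules = set()
    for o in o_prime:
        a, b = "A_" + o, "B_" + o
        rules.add(("S", (a,)))                         # S -> A_o
        for c in contexts:                             # A_o -> c  (o in c)
            if o in c or o == "NULL":                  # null object in every context
                rules.add((a, ("|".join(sorted(c)),)))
        rules.add((a, (a, b)))                         # A_o -> A_o B_o
        rules.add((a, (a, "B_NULL")))                  # A_o -> A_o B_null
        for w in vocab:                                # B_o -> w
            rules.add((b, (w,)))
    return sorted(rules)

rules = reference_pcfg_rules({"PIG", "DOG"}, [{"PIG", "DOG"}],
                             ["is", "that", "the", "pig"])
print(len(rules), "rules")
```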
An Adaptor Grammar (AG) is a septuple (N, W, R, S, θ, A, C) consisting of a PCFG (N, W, R, S, θ) in which a subset A ⊆ N of the nonterminals are identified as adapted, and where each adapted nonterminal X ∈ A has an associated adaptor C_X. An adaptor C_X for X is a function that maps a distribution over trees T_X to a distribution over distributions over T_X. In this paper we use two-parameter Poisson-Dirichlet distributions as adaptors, so the corresponding predictive distributions are Pitman-Yor Processes (PYPs).

Just as for a PCFG, an AG defines distributions G_X over trees T_X for each X ∈ N ∪ W. If X ∈ W or X ∉ A then G_X is defined just as for a PCFG above, i.e., using (1). However, if X ∈ A then G_X is defined in terms of an additional distribution H_X as follows:

    G_X ~ C_X(H_X)
    H_X = Σ_{X → Y_1 ... Y_m ∈ R_X} θ_{X → Y_1 ... Y_m} TD_X(G_{Y_1}, ..., G_{Y_m})

That is, the distribution G_X associated with an adapted nonterminal X ∈ A is a sample from "adapting" (i.e., applying C_X to) its "ordinary" PCFG distribution H_X. Just as with the PCFG, an AG generates the distribution over trees G_S, where S ∈ N is the start symbol. However, while G_S in a PCFG is a fixed distribution (given the rule probabilities θ), in an AG the distribution G_S is itself a random variable (because each G_X for X ∈ A is random). Informally, an AG can be understood as caching the trees associated with adapted nonterminals. Generating a tree associated with an adapted nonterminal involves either reusing an already generated tree from the cache, or else generating a "fresh" tree as in a PCFG.

3.4 Word segmentation with adaptor grammars

AGs can be used as models of word segmentation, which we briefly review here; see Johnson [12] for more details. The input to the AG consists of a corpus of phoneme strings. For example, the phoneme string corresponding to Is that the pig? (with its correct segmentation indicated by the filled separators) is:

    i z | D & t | D e | p I g

We can represent any possible segmentation of any possible sentence as a tree generated by the following unigram AG.

    Sentence → Word+
    Word → Phoneme+
    Phoneme → a | b | ...        (4)

The trees generated by this adaptor grammar are the same as the trees generated by the CFG rules. (In this and the following grammars, the Kleene "+" is expanded into a set of left-recursive rules). For example, the following skeletal parse, in which all but the Word nonterminals are suppressed (the others are deterministically inferrable), shows the parse that corresponds to the correct segmentation of the string above.

    (Word i z) (Word D & t) (Word D e) (Word p I g)

Because the Word nonterminal in the AG is adapted (indicated in the original grammars by underlining), the adaptor grammar learns the probability of entire Word subtrees (e.g., the probability that pIg is a Word); see [12] for further details. This AG implements the unigram segmentation model of Goldwater et al [9], and as explained in section 2, it has the same tendency to undersegment as the original unigram model. The collocation AG (5) produces a more accurate segmentation because it models (and can therefore "explain away") some of the inter-word dependencies.

    Sentence → Colloc+
    Colloc → Word+
    Word → Phoneme+
    Phoneme → a | b | ...        (5)

The collocation AG is a hierarchical process, where the base distribution for the Colloc (collocation) nonterminal adaptor is generated from the Word distribution. The collocation AG generates a sentence as a sequence of Colloc (collocation) nonterminals, each of which is a sequence of Word nonterminals.
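The caching behaviour of an adapted nonterminal can be illustrated with the Pitman-Yor predictive rule: reuse a previously generated tree with probability roughly proportional to its count, or draw a fresh tree from H_X. The sketch below uses a simplified variant that keeps one cache entry per distinct tree (an exact PYP tracks seating at individual tables), so it illustrates the idea rather than the inference procedure used in the paper.

```python
import random

class PYPAdaptor:
    """Predictive sampling for an adapted nonterminal: reuse a cached tree
    or draw a fresh one from the base distribution H_X. Simplification:
    one cache entry per distinct tree, not per Pitman-Yor table."""

    def __init__(self, base_sampler, a=0.5, b=1.0):
        self.base_sampler = base_sampler   # draws a fresh tree from H_X
        self.a, self.b = a, b              # PYP discount and concentration
        self.counts = {}                   # tree -> times generated so far

    def sample(self):
        n, k = sum(self.counts.values()), len(self.counts)
        if random.random() < (self.b + self.a * k) / (n + self.b):
            tree = self.base_sampler()     # "fresh" draw from H_X
        else:                              # reuse a cached tree
            trees = list(self.counts)
            weights = [self.counts[t] - self.a for t in trees]
            tree = random.choices(trees, weights=weights)[0]
        self.counts[tree] = self.counts.get(tree, 0) + 1
        return tree

word = PYPAdaptor(lambda: random.choice(["pIg", "dOg", "D6"]))
print([word.sample() for _ in range(10)])  # repeats become increasingly likely
```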
It generates skeletal parses such as the following:

    (Colloc (Word i z)) (Colloc (Word D & t)) (Colloc (Word D e) (Word p I g))

In this parse, iz and D&t are analysed as both Words and Collocations, while De pIg is analysed as a Collocation consisting of two Words. Given training corpora like the ones we use here, the collocations this AG finds are often noun phrases.

3.5 Adaptor grammars for joint segmentation and word-object acquisition

This section explains how to combine the word-object reference PCFGs presented in section 3.2 with the word segmentation AGs presented in section 3.4. Combining the word-object reference PCFGs (2) or (3) with the unigram AG (4) is relatively straightforward; all we need to do is replace the last rule B_o → w in these grammars with B_o → Phoneme+, i.e., the B_o nonterminals expand to an arbitrary sequence of phonemes, and the B_o nonterminals are adapted, so these subtrees are cached and reused as appropriate. For example, the unigram-reference AG is as follows:

    S → A_o             o ∈ O′
    A_o → c             c ∈ C, o ∈ c
    A_o → A_o B_o       o ∈ O′
    A_o → A_o B_∅       o ∈ O′
    B_o → Phoneme+      o ∈ O′

The unigram-reference AG specifies essentially the same model as the one investigated in Jones et al [8], and the results below are consistent with those that Jones et al report. This grammar generates skeletal parses such as the following:

    (B_∅ i z) (B_∅ D & t) (B_∅ D e) (B_PIG p I g)

The unigram-reference1 AG is similar to the unigram-reference AG, except that it stipulates that at most one word per sentence is associated with a (non-null) object.

It is also possible to combine the word-object reference PCFGs with the collocation AG. The resulting AGs are straightforward but more complex, so they are not shown here. The collocation-reference AG is a combination of the collocation AG for word segmentation and the reference PCFG for modelling the word-object relationship. It permits an arbitrary number of words in a sentence to be referential. Interestingly, there are two different reasonable ways of combining the collocation AG with the reference1 PCFG. The collocation-reference1 AG requires that at most one word in a sentence is referential, just like the reference1 PCFG (3). The collocation-referenceC1 AG is similar to the collocation-reference1 AG, except that it requires that at most one word in a collocation is referential. This means that the collocation-referenceC1 AG permits multiple referential words in a sentence (but they must all refer to the same object). This AG is linguistically plausible because a collocation often consists of a content word, which may be referential, surrounded by function words, which are generally not referential.

4 Experimental results

We used the same training corpus as Jones et al [8], which was based on the corpus collected by Fernald et al [19] and annotated with the objects in the non-linguistic context by Frank et al [7]. In these experiments we used the publicly-available AG inference software described in [15]. Rather than specifying the concentration parameters of each Pitman-Yor Process (PYP) associated with the adapted nonterminals, that software permits us to place priors on them and sample them. Here we placed a uniform prior on all PYP a parameters and a sparse Gamma(100, 0.01) prior on the PYP b parameters. For each grammar we ran 8 MCMC chains for 5,000 iterations each over the corpus, and collected the sample parses from every 10th iteration from the last 2,500 iterations generated by each run.
For each sentence in each sample we extracted the word segmentation and the word-object relationships the parse implies, so we obtained 2,000 sample analyses for each sentence in the corpus. We computed the modal (i.e., most frequent) analysis of each sentence, and this is what we scored below [15].

Perhaps the most basic question is: does non-linguistic context help word segmentation? We measure accuracy here by token f-score [9]. Jones et al [8] investigated this question by comparing analyses from what we are calling the unigram and unigram-reference models, and failed to find any overall effect of the non-linguistic context (although they did show that it improves the segmentation accuracy of referential words). However, as the following table shows, we do see a marked improvement in word segmentation f-score when we combine non-linguistic context with the more accurate collocation models.

    Model                      word segmentation f-score
    unigram                    0.533
    unigram-reference          0.537
    unigram-reference1         0.547
    collocation                0.695
    collocation-reference      0.726
    collocation-reference1     0.719
    collocation-referenceC1    0.750

We can also ask the converse question: does better word segmentation improve sentence referent identification? Here we measure how well the models identify which object, if any, each sentence refers to; this measure does not directly evaluate word segmentation accuracy. The baseline model here assigns each sentence the "null" object ∅, achieving an accuracy of 0.709. As the table below shows, only the collocation-referenceC1 AG, with its more complex constraints on the word-object relationship, clearly surpasses this baseline. We can also measure the f-score with which the models identify non-∅ sentence referents; now the trivial baseline model achieves 0 f-score.

    Model                      sentence referent accuracy    sentence referent f-score
    unigram                    0.709                         0
    unigram-reference          0.702                         0.355
    unigram-reference1         0.503                         0.495
    collocation                0.709                         0
    collocation-reference      0.728                         0.280
    collocation-reference1     0.440                         0.493
    collocation-referenceC1    0.839                         0.747

We see a marked improvement in sentence referent accuracy and sentence referent f-score with the collocation-referenceC1 AG. Finally, we can ask: how well do the models identify the head nouns of referring noun phrases, such as pIg in De pIg? We measure this by calculating the f-score of (word, object) token pairs identified by the model, where the object is not ∅. This is a single number that indicates how good the models are at identifying referring words and the objects that they refer to.

    Model                      topical word f-score
    unigram                    0
    unigram-reference          0.149
    unigram-reference1         0.147
    collocation                0
    collocation-reference      0.220
    collocation-reference1     0.321
    collocation-referenceC1    0.636

Again, we find that the collocation-referenceC1 AG identifies referring words and the objects they refer to more accurately than the other models.

5 Conclusion

This paper has used Adaptor Grammars (AGs) to formulate a variety of models that jointly segment utterances into words and identify the objects in the non-linguistic context that some of these words refer to. The AGs differed in the kinds of generalisations they are capable of learning, and in the relationship between word segmentation and word reference that they assume. The most accurate results in word segmentation and in the identification of the word-object relationship were obtained by the collocation-referenceC1 AG, which tightly integrates a collocation-based model of word segmentation with constraints that require no more than one referential word per collocation.
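For concreteness, token f-score can be computed by comparing the (start, end) word spans implied by the gold and predicted segmentations; the sketch below is our own illustration of the metric, not the evaluation script used for these experiments.

```python
def word_spans(segmented):
    """Map a segmented utterance, e.g. ['Iz', 'D&t', 'D6', 'pIg'],
    to the set of (start, end) phoneme spans of its word tokens."""
    spans, pos = set(), 0
    for word in segmented:
        spans.add((pos, pos + len(word)))
        pos += len(word)
    return spans

def token_f_score(gold_corpus, pred_corpus):
    tp = fp = fn = 0
    for gold, pred in zip(gold_corpus, pred_corpus):
        g, p = word_spans(gold), word_spans(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# The undersegmented analysis gets credit only for exactly matching tokens.
print(token_f_score([["Iz", "D&t", "D6", "pIg"]],
                    [["Iz", "D&t", "D6pIg"]]))   # -> 0.571...
```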
As argued in the introduction, this is consistent with an "interactive" approach to language learning.

References

[1] Patricia K. Kuhl. Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5:831–843, 2004.
[2] Katharine Graf Estes, Julia L. Evans, Martha W. Alibali, and Jenny R. Saffran. Can infants map meaning to newly segmented words? Statistical segmentation and word learning. Psychological Science, 18(3):254–260, 2007.
[3] James L. McClelland and David E. Rumelhart. An interactive activation model of context effects in letter perception. Psychological Review, 88(5):375–407, 1981.
[4] Jeffrey Elman. Finding structure in time. Cognitive Science, 14:197–211, 1990.
[5] M. Brent and T. Cartwright. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61:93–125, 1996.
[6] M. Brent. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105, 1999.
[7] Michael C. Frank, Noah Goodman, and Joshua Tenenbaum. Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20:579–585, 2009.
[8] Bevan K. Jones, Mark Johnson, and Michael C. Frank. Learning words and their meanings from unsegmented child-directed speech. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 501–509, Los Angeles, California, June 2010. Association for Computational Linguistics.
[9] Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21–54, 2009.
[10] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
[11] Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA, 2007.
[12] Mark Johnson. Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, Columbus, Ohio, 2008. Association for Computational Linguistics.
[13] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[14] Mark Johnson. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1148–1157, Uppsala, Sweden, July 2010. Association for Computational Linguistics.
[15] Mark Johnson and Sharon Goldwater. Improving nonparametric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–325, Boulder, Colorado, June 2009. Association for Computational Linguistics.
[16] Shay B. Cohen, David M. Blei, and Noah A. Smith. Variational inference for adaptor grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 564–572, Los Angeles, California, June 2010. Association for Computational Linguistics.
[17] Kenichi Kurihara and Taisuke Sato. Variational Bayesian grammar induction for natural language. In 8th International Colloquium on Grammatical Inference, 2006.
[18] Mark Johnson, Thomas Griffiths, and Sharon Goldwater. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139–146, Rochester, New York, April 2007. Association for Computational Linguistics.
[19] Anne Fernald and Hiromi Morikawa. Common themes and cultural variations in Japanese and American mothers' speech to infants. Child Development, 64(3):637–656, 1993.
Variational Bounds for Mixed-Data Factor Analysis

Mohammad Emtiyaz Khan
University of British Columbia
Vancouver, BC, Canada V6T 1Z4
[email protected]

Guillaume Bouchard
Xerox Research Center Europe
38240 Meylan, France
[email protected]

Benjamin M. Marlin
University of British Columbia
Vancouver, BC, Canada V6T 1Z4
[email protected]

Kevin P. Murphy
University of British Columbia
Vancouver, BC, Canada V6T 1Z4
[email protected]

Abstract

We propose a new variational EM algorithm for fitting factor analysis models with mixed continuous and categorical observations. The algorithm is based on a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the bound we propose is significantly faster than previous variational methods. We show that EM is significantly more robust in the presence of missing data compared to treating the latent factors as parameters, which is the approach used by exponential family PCA and other related matrix-factorization methods. A further benefit of the variational approach is that it can easily be extended to the case of mixtures of factor analyzers, as we show. We present results on synthetic and real data sets demonstrating several desirable properties of our proposed method.

1 Introduction

Continuous latent factor models, such as factor analysis (FA) and probabilistic principal components analysis (PPCA), are very commonly used density models for continuous-valued data. They have many applications including latent factor discovery, dimensionality reduction, and missing data imputation. The factor analysis model asserts that a low-dimensional continuous latent factor z_n ∈ R^L underlies each high-dimensional observed data vector y_n ∈ R^D. Standard factor analysis models assume the prior on the latent factor has the form p(z_n) = N(z_n | 0, I), while the likelihood has the form p(y_n | z_n) = N(y_n | W z_n + μ, Σ). W is the D × L factor loading matrix, μ is an offset term, and Σ is a D × D diagonal matrix specifying the marginal noise variances. If we set Σ = σ²I and require W to be orthogonal, we recover probabilistic principal components analysis (PPCA). Such models can be easily fit using the expectation-maximization (EM) algorithm [Row97, TB99].

The FA model can be extended to other members of the exponential family by requiring that the natural (canonical) parameters have the form W z_n + μ [WK01, CDS02, MHG08, LT10]. This is the unsupervised version of a generalized linear model (GLM), and is extremely useful since it allows for non-trivial dependencies between data variables with mixed types.

The principal difficulty with the general FA model is computational tractability, both at training and test time. A problem arises because the Gaussian prior p(z_n) is not conjugate to the likelihood except when y_n also has a Gaussian distribution (the standard FA model). There are several approaches one can take to this problem. The simplest is to approximate the posterior p(z_n | y_n) using a point estimate, which is equivalent to viewing the latent variables as parameters and estimating them by maximum likelihood. This approach is known as exponential family PCA (ePCA) [CDS02]. We refer to it as the "MM" approach to fitting the general FA model since we maximize over z_n in the E-step, as well as W in the M-step. The main drawback of the MM approach is that it ignores posterior uncertainty in z_n, which can result in over-fitting unless the model is carefully regularized [WCS08]. This is a particular concern when we have missing data.

[Figure 1 appears here: the graphical model for the generalized mixture of factor analyzers, in which the mixture indicator q_n and latent factor z_n generate the continuous vector y_n^C and the discrete variables y_nd^D through the loading matrices W_k^C, W_dk^D and offsets μ_k^C, μ_dk^D, with hyperparameters λ_z, λ_w and mixture prior π. The accompanying notation summary is reproduced below.]

    q_n                 mixture indicator variable
    z_n                 latent factor vector
    y_n^C               continuous data vector
    y_nd^D              discrete data variable
    W_k^C, W_dk^D       factor loading matrices
    μ_k^C, μ_dk^D       offset vectors
    Σ_k^C               continuous noise covariance
    π                   mixture prior parameter
    N                   # data cases
    L                   # latent dimensions
    K                   # mixture components
    D_c                 # continuous variables
    D_d                 # discrete variables
    M_d + 1             # classes per discrete variable

Figure 1: The generalized mixture of factor analyzers model for discrete and continuous data.

The opposite end of the model estimation spectrum is to integrate out both z_n and W using Markov chain Monte Carlo methods. This approach has recently been studied under the name "Bayesian exponential family PCA" [MHG08] using a Hamiltonian Monte Carlo (HMC) sampling approach. We will refer to this as the "SS" approach to indicate that we are integrating out both z_n and W by sampling. The SS approach preserves posterior uncertainty about z_n (unlike the MM approach) and is robust to missing data, but can have a significantly higher computational cost.

In this work, we study a variational EM model fitting approach that preserves posterior uncertainty about z_n, is robust to missing data, and is more computationally efficient than SS. We refer to this as the "VM" approach to indicate that we integrate over z_n in the E-step after applying a variational bound, and maximize over W in the M-step. We focus on the case of continuous (Gaussian) and categorical data. Our main contribution is the development of variational EM algorithms for factor analysis and mixtures of factor analyzers based on a simple quadratic lower bound to the multinomial likelihood (which subsumes the Bernoulli case) [Boh92]. This bound results in an EM iteration that is computationally more efficient than the bound previously proposed by Jaakkola for binary PCA when the training data is fully observed [JJ96], but is less tight. The proposed bound has advantages relative to other previously introduced bounds, as we discuss in the following sections.

2 The Generalized Mixture of Factor Analyzers Model

In this section, we describe a model for mixed continuous and discrete data that we call the generalized mixture of factor analyzers model. This model has two important special cases: mixture models and factor analysis, both for mixed continuous and discrete data. We use the general model as well as both special cases in subsequent experiments. In this work, we focus on Gaussian distributed continuous data and multinomially distributed discrete data. The graphical model is given in Figure 1 while the probabilistic model is given in Equations 1 to 4. We begin with a description of the general model and then highlight the two special cases.

We let n ∈ {1, ..., N} index data cases, d ∈ {1, ..., D_d} index discrete data dimensions, and k ∈ {1, ..., K} index mixture components. Superscripts C and D indicate variables associated with continuous and discrete data respectively. We let y_n^C ∈ R^{D_c} denote the continuous data vector and y_nd^D ∈ {1, ..., M+1} denote the d-th discrete data variable.¹
We use a 1-of-(M+1) encoding for the discrete variables, where a variable y_nd^D = m is represented by an (M+1)-dimensional vector y_nd^D in which the m-th element is set to 1, and all remaining elements equal 0. We denote the complete data vector by y_n = [y_n^C, y_n1^D, ..., y_nD_d^D].

The generative process begins by sampling a state of the mixture indicator variable q_n for each data case n from a K-state multinomial distribution with parameters π. Simultaneously, a length-L latent factor vector z_n ∈ R^L is sampled from a zero-mean Gaussian distribution with precision parameter λ_z. Both steps are given in Equation 1. The natural parameters of the distribution over the data variables are obtained by passing the latent factor vector z_n through a linear function defined by a factor loading matrix and an offset term, both of which depend on the setting of the mixture indicator variable q_n.

    p(z_n, q_n | θ) = N(z_n | 0, λ_z^{-1} I_L) M(q_n | π)                                            (1)
    p(y_n | z_n, q_n = k, θ) = N(y_n^C | W_k^C z_n + μ_k^C, Σ_k^C) Π_{d=1}^{D_d} M(y_nd^D | S(η_ndk))  (2)
    η_ndk = W_dk^D z_n + μ_dk^D                                                                       (3)
    S_m(η) = exp[η_m − lse(η)]                                                                        (4)
    lse(η) = log[ Σ_{m=1}^{M+1} exp(η_m) ]                                                            (5)

Assuming that q_n = k, the continuous data vector y_n^C is Gaussian distributed with mean W_k^C z_n + μ_k^C and covariance Σ_k^C, and each discrete data variable y_nd^D is multinomially distributed with natural parameters η_ndk = W_dk^D z_n + μ_dk^D, as seen in Equation 2. Here, N(·|m, V) denotes a Gaussian distribution with mean m and covariance V, while M(·|π) denotes a multinomial distribution with parameter vector π such that Σ_i π_i = 1 and π_i ≥ 0. For the discrete data variables, the natural parameter vector is converted into the standard mean parameter vector through the softmax function S(η) = [S_1(η), ..., S_{M+1}(η)], where S_m(η) is defined in Equation 4. The softmax function S_m(η) is itself defined in terms of the log-sum-exp (lse) function, which we give in Equation 5.

We note that the factor loading matrices for the k-th mixture component are W_k^C ∈ R^{D_c × L} and W_dk^D ∈ R^{(M+1) × L}, while the offsets are μ_k^C ∈ R^{D_c} and μ_dk^D ∈ R^{M+1}. We define the ensemble of factor loading matrices and offsets to be W_k = [W_k^C, W_1k^D, ..., W_{D_d}k^D] and μ_k = [μ_k^C, μ_1k^D, μ_2k^D, ..., μ_{D_d}k^D], respectively. The complete set of parameters for this model is thus θ = {W_{1:K}, μ_{1:K}, Σ_{1:K}^C, π, λ_z}.

To complete the model specification, we must specify the prior on these parameters. For each row of each factor loading matrix W_k, we use a Gaussian prior of the form N(0, λ_w^{-1} I). We use vague conjugate priors for the remaining parameters.

As mentioned at the start of this section, this general model has two important special cases: generalized factor analysis and mixture models for mixed continuous and discrete data. The factor analysis model is obtained by using one mixture component and at least one latent factor (K = 1, L ≥ 1). The mixture model is obtained by using no latent factors and at least one mixture component (K > 1, L = 0). In the mixture model case where L = 0, the distribution is modeled through the offset parameters μ_k only. We will compare these three models in Section 5.

Before concluding this section, we point out one key difference between the current model and other latent factor models for discrete data like multinomial PCA [BJ04] and latent Dirichlet allocation (LDA) [BNJ03].
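A generative sketch of Equations 1 and 2 for a single data case, assuming fully specified parameters. The parameter dictionary layout and the toy dimensions are invented for illustration; this is our reading of the model, not the authors' code.

```python
import numpy as np

def softmax(eta):
    e = np.exp(eta - eta.max())
    return e / e.sum()

def sample_mixed(params, rng):
    """Draw (q_n, z_n, y_n^C, y_n^D) from Equations 1-2 for one data case."""
    k = rng.choice(len(params["pi"]), p=params["pi"])      # mixture indicator
    z = rng.normal(0.0, params["lambda_z"] ** -0.5,
                   size=params["L"])                       # latent factor
    mean_c = params["Wc"][k] @ z + params["mu_c"][k]       # continuous block
    y_c = rng.multivariate_normal(mean_c, params["Sigma_c"][k])
    y_d = []                                               # discrete block
    for Wd, mud in zip(params["Wd"][k], params["mu_d"][k]):
        eta = Wd @ z + mud                                 # natural parameters
        y_d.append(rng.choice(len(eta), p=softmax(eta)))
    return k, z, y_c, y_d

rng = np.random.default_rng(0)
L, Dc, M = 2, 3, 2                                         # toy sizes
params = dict(pi=[0.5, 0.5], lambda_z=1.0, L=L,
              Wc=[rng.normal(size=(Dc, L)) for _ in range(2)],
              mu_c=[np.zeros(Dc)] * 2,
              Sigma_c=[0.1 * np.eye(Dc)] * 2,
              Wd=[[rng.normal(size=(M + 1, L))] for _ in range(2)],
              mu_d=[[np.zeros(M + 1)]] * 2)
print(sample_mixed(params, rng))
```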
In our model, the natural parameters for discrete data are defined on a low-dimensional linear subspace and are mapped to the mean parameters via the softmax function. In multinomial PCA and LDA, the mean parameters are instead directly defined on a low-dimensional linear subspace. The latter approach can also be extended to the mixed-data case [BDdF+03]. However, model fitting is even more computationally challenging than in our approach. In fact, the bounds we propose can be used in this alternative setting, but we leave this to future work.

¹ Note that we assume all the discrete data variables have the same number of states, namely M+1, for notational simplicity only. In the general case, the d-th discrete variable has M_d + 1 states.

3 Variational Bounds for Model Fitting

In the standard expectation-maximization (EM) algorithm for mixtures of factor analyzers, the E-step consists of marginalizing over the complete-data log likelihood with respect to the posterior over the mixture indicator variable q_n and latent factors z_n. The M-step consists of maximizing the expected complete log likelihood with respect to the parameters θ. In the case of Gaussian observations, this posterior is available in closed form because of conjugacy. The introduction of discrete observations, however, makes it intractable to compute the posterior, as the likelihood for these observations is not conjugate to the Gaussian prior on the latent factors.

To overcome these problems, we propose to use a quadratic bound on the lse function. This allows us to obtain closed form updates for both the E and M steps. We use the quadratic bound described in [Boh92]; in the rest of the paper, we will refer to it as the "Bohning bound". For simplicity, we describe the bound only for one discrete measurement with K = 1 and μ_k = 0, in order to suppress the n, k and d subscripts. To ensure identifiability, we assume that the last element of η is zero (this can be enforced by setting the last row of W to zero).

The key idea behind the Bohning bound is to take a second order Taylor series expansion of the lse function around a point ψ. An upper bound to the lse function is found by replacing the Hessian matrix H(η), which appears in the second order term, with a fixed matrix A such that A − H(η) is positive definite for all η [Boh92]. Bohning gives one such matrix A, which we define below. The expansion point ψ is a free variational parameter that must be optimized.

    lse(η) ≤ ½ ηᵀ A η − b_ψᵀ η + c_ψ                    (6)
    A = ½ [ I_M − 1_M 1_Mᵀ / (M+1) ]                    (7)
    b_ψ = A ψ − S(ψ)                                    (8)
    c_ψ = ½ ψᵀ A ψ − S(ψ)ᵀ ψ + lse(ψ)                   (9)

Here ψ ∈ R^M is the vector of variational parameters, I_M is the identity matrix of size M × M, and 1_M is a vector of ones of length M. By substituting this bound into the log-likelihood, completing the square and exponentiating, we obtain the Gaussian lower bound described below, in which a Gaussian-like "pseudo" observation ỹ_ψ corresponds to the discrete observation y^D.

    p(y^D | z, W) ≥ h(ψ) N(ỹ_ψ | W z, A^{-1})           (10)
    ỹ_ψ = A^{-1} (b_ψ + y^D)                            (11)
    h(ψ) = |2π A^{-1}|^{1/2} exp( ½ ỹ_ψᵀ A ỹ_ψ − c_ψ )  (12)

We use this result to obtain a lower bound for each mixed data vector y_n. We will suppress the ψ subscripts, which differ for each data point n and each discrete variable d, for clarity. Let ỹ_n = [y_n^C, ỹ_{1,n}, ..., ỹ_{D_d,n}] be the data vector for a given n and ψ.
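The quantities in Equations 6 to 9 are straightforward to compute. The sketch below evaluates A, b_ψ and c_ψ, working with the M free components of η (the last natural parameter fixed at zero), and numerically checks that the quadratic upper-bounds lse with tightness at η = ψ. The dimensions and tolerances are our own choices.

```python
import numpy as np

def lse(eta):                # log(1 + sum exp(eta)); last class fixed at zero
    return np.log1p(np.exp(eta).sum())

def S(eta):                  # first M components of the softmax
    e = np.exp(eta)
    return e / (1.0 + e.sum())

def bohning_terms(psi):
    M = len(psi)
    A = 0.5 * (np.eye(M) - np.ones((M, M)) / (M + 1))    # Eq. 7
    b = A @ psi - S(psi)                                 # Eq. 8
    c = 0.5 * psi @ A @ psi - S(psi) @ psi + lse(psi)    # Eq. 9
    return A, b, c

def quad_bound(eta, A, b, c):                            # right side of Eq. 6
    return 0.5 * eta @ A @ eta - b @ eta + c

rng = np.random.default_rng(0)
psi = rng.normal(size=3)
A, b, c = bohning_terms(psi)
assert abs(quad_bound(psi, A, b, c) - lse(psi)) < 1e-9   # tight at eta = psi
for _ in range(100):                                     # upper bound elsewhere
    eta = rng.normal(scale=3.0, size=3)
    assert quad_bound(eta, A, b, c) >= lse(eta) - 1e-9
```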
, A?1 ) p(? yn |zn ) = N (? yn |Wz W 1 Dd 1 Dd Given this pseudo observation, the computation of the posterior means mn and covariances Vn is similar to the Gaussian FA model as seen below. This result can be generalized to the mixture case in a straightforward way. The M-step is the same as in mixtures of Gaussian factor analyzers [GH96]. ?1 ? T? ? Vn = (W ? + ?z IL )?1 , W ?1 ? T? ? mn = V n W ?n y (13) The only question remaining is how to obtain the value of ?. By maximizing the lower bound, one ? n . This follows from the fact that the Bohning bound can show that the optimal value is ? n = Wm is tight for lse(?) when ? = ?, and that the curvature is independent of ? [Boh92]. We iterate this update until convergence. In practice, we find that the method usually converges in five or fewer iterations. The most attractive feature of the bound described above is its computational efficiency. To see this, note that the posterior covariance Vn does not in fact depend on n if the data vector is fully 4 observed, since A is a constant matrix. Consequently we need only invert Vn once outside the EM loop instead of N times, once for each data point. We will see in the next section that the other existing quadratic bounds do not have this property. To derive the overall computational cost of our ? n to be D and assume K = 1. Computing Vn EM algorithm, let us define the total dimension of y takes O(L3 + L2 D) time, and computing each mn takes O(L2 + LD) time. So the total cost of one E-step is O(L3 + L2 D + N I(L2 + LD)), where I is the number of variational updates. If there is missing data, Vn will change across data cases, so the total cost will be O(N I(L3 + L2 D)). 3.1 Comparison with Other Bounding Methods In the binary case, the Bohning bound reduces to the following: log(1 + e? ) ? 12 A? 2 ? b? ? + c? , where A = 1/4, b? = A? ? (1 + e?? )?1 , and c? = 12 A? 2 ? (1 + e?? )?1 ? + log(1 + e? ). It is interesting to compare this bound to Jaakkola?s bound [JJ96] used in [Tip98, YT04]. This bound can also be written in the quadratic form: log(1 + e? ) ? 12 A?? ? 2 ? ?b? ? + c?? , where A?? = 2?? , ?b? = ? 12 , 1 c?? = ??? ? 2 ? 21 ? + log(1 + e? ), ?? = 2? ( 1+e1?? ? 21 ). Although the Jaakkola bound is tighter than the Bohning bound, it has higher computational complexity. The reason is that the A?? parameter depends on ? and hence on n, which means we need to compute a different posterior covariance matrix for each n. Consequently, the cost of an E-step is O(N I(L3 + L2 D)), even if there is no missing data (note the L3 term inside the N I loop). To explore the speed vs accuracy trade-off, we use the synthetic binary data described in [MHG08] with N = 600, D = 16, and 10% missing data. We learn a binary FA model with L = 10, ?z = 1, and ?w = 0. We learn on the observed entries in the data matrix and compute the mean squared error (MSE) on the held out missing entries as in [MHG08]. We average the results over 20 repetitions of the experiment. We see in Figure 2 (top left) that the Jaakkola bound gives a lower MSE than Bohning?s bound in less time on this data. Next, we consider the case where the training data is fully observed using a modified version of the data generating procedure described in [MHG08]. We vary D from 16 to 128 while setting L = 0.25D and N = 10D. We sample L different binary prototypes at random, assign each data case to a prototype, and add 10% random binary noise. We measure the average time per iteration over 40 iterations of each method. 
Figure 2 (bottom left) shows that the Bohning bound exhibits much better scalability per iteration than the Jaakkola bound in this regime.

The speed issue becomes more serious when combining binary variables with categorical variables. Firstly, there is no direct extension of the Jaakkola bound to the general categorical case. Hence, to combine categorical variables with binary variables, we can use the Jaakkola bound for the binary variables and the Bohning bound for the rest. However, this is not computationally efficient, as we need to compute the posterior covariance for each data point because of the Jaakkola bound. For computational simplicity, we use Bohning's bound for both binary and categorical data.

Various other bounds on and approximations to the multinomial likelihood also exist; however, they are all more computationally intensive, and do not give an efficient variational algorithm. To the best of our knowledge these methods have not been applied to the FA model, but we describe them briefly for completeness. An extension of the Jaakkola bound to the multinomial case was given in [Bou07]; however, this tends to be less accurate than the Bohning bound. Another approach [BL06] is to use the concavity of the log function to write lse(η) ≤ φ (1 + Σ_{j=1}^{M} exp(η_j)) − log φ − 1, where φ is a variational parameter. This bound does not give closed-form updates for the E and M steps, so a numerical optimizer needs to be used (see [BL06] for details). Instead of using a bound, an alternative approach is to apply a quadratic approximation derived from a Taylor series expansion of the LSE function [AX07]. This provides a tighter approximation that could perform better than a bound, but one cannot make convergence guarantees when using it inside of EM. In practice we found this alternative approach to be very slow on the datasets that we consider. In view of its speed and simplicity, we will only consider the Bohning method for the remainder of the paper.

[Figure 2 panels (numeric plot content not recoverable from the extraction): "Accuracy vs Speed" (test MSE vs time for FA-VM, FA-VJM, FA-SS), "Scalability" (time per iteration vs data dimension D), and train/test MSE vs prior strength λ_W under 10% and 50% missing data.]

Figure 2: Top left: accuracy vs speed of variational EM with the Bohning bound (FA-VM), Jaakkola bound (FA-VJM) and HMC (FA-SS) on synthetic binary data. Bottom left: time per iteration of EM with the Bohning bound and the Jaakkola bound as we vary D. Right: MSE vs λ_w for FA-MM, FA-VM, and FA-SS on synthetic Gaussian data. We show results on the test and training sets, for 10% and 50% missing data.

4 Alternative Estimation Approaches

In this section, we discuss several alternative methods for fitting the generalized FA model in the case K = 1, which we compare to the VM method. We defer comparisons of FA to mixture models to Section 5.

4.1 Maximize-Maximize (MM) Method

The simplest approach to fit the FA model is to maximize log p(Y, Z, W | λ_w, λ_z) with respect to Z and W, the matrix of latent factor values and the factor loading matrix. It is straightforward to compute the gradient of the log posterior and apply a generic optimizer (we use the limited-memory quasi-Newton method). Alternatively, one can use coordinate descent [CDS02]. We set the hyperparameters λ_w and λ_z by cross-validation. To handle missing data, we simply evaluate the gradients by only summing over the observed entries of Y. The joint MAP objective and its gradients are sketched below.
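The following sketch (ours, not the authors' code) spells out that objective and its gradients for the fully observed Gaussian-output case, a simplification of the paper's general mixed-data likelihood; in practice W and Z would be updated by a generic optimizer such as L-BFGS rather than the plain gradient ascent shown here:

```python
import numpy as np

def mm_objective_and_grads(Y, Z, W, lam_w, lam_z, noise_var=1.0):
    """Joint log posterior log p(Y, Z, W | lam_w, lam_z) up to constants,
    for continuous Y with isotropic noise, plus gradients w.r.t. Z and W."""
    R = Y - Z @ W.T                                   # residuals, N x D
    logpost = (-0.5 / noise_var * np.sum(R ** 2)
               - 0.5 * lam_z * np.sum(Z ** 2)
               - 0.5 * lam_w * np.sum(W ** 2))
    dZ = R @ W / noise_var - lam_z * Z                # ascent directions
    dW = R.T @ Z / noise_var - lam_w * W
    return logpost, dZ, dW

def mm_fit(Y, L, lam_w=1.0, lam_z=1.0, lr=1e-3, n_steps=2000, seed=0):
    """Toy gradient-ascent driver for the MM objective."""
    rng = np.random.default_rng(seed)
    N, D = Y.shape
    Z, W = rng.normal(size=(N, L)), rng.normal(size=(D, L))
    for _ in range(n_steps):
        _, dZ, dW = mm_objective_and_grads(Y, Z, W, lam_w, lam_z)
        Z += lr * dZ
        W += lr * dW
    return Z, W
```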
At test time, consider a data vector consisting of missing and observed components, y* = [y*_m, y*_o]. To fill in the missing entries, we compute ẑ* = argmax_z p(z, y*_o | Ŵ) and use it together with Ŵ to predict y*_m.

The MM approach is simple and widely applicable, but these benefits come at the expense of ignoring the posterior variance of Z [WCS08]. This has negative consequences for the method in terms of sensitivity to the parameters λ_w and λ_z. To illustrate this effect, we generate a continuous dataset using D = 10, L = 5, and N = 200 data cases by sampling from the FA model. We set λ_w = 1, λ_z = 1, and σ_c = 0.1. We standardize each data dimension to have unit variance and zero mean. We consider the case of 10% and 50% missing data. We evaluate the sensitivity of the methods to the setting of the prior precision parameter λ_w by varying it over the range 10^{-2} to 10^{2}. We fix λ_z = 1, since this is the standard assumption when fitting FA models. We run the methods on a random 50/50 train/test split. We train on the observed entries in the training set, and then compute MSE on the missing entries in the training and test sets. We average the results over 20 repetitions of the experiment.

Figure 2 (top right) shows that the test MSE of the MM method is extremely sensitive to the prior precision λ_w. We can see that this sensitivity increases as a function of the missing data rate. We hypothesize that this is a result of the MM method ignoring the posterior uncertainty in Z. This is supported by looking at the MSE on the training set, Figure 2 (bottom right). We see that the MM method overfits when λ_w is small. Consequently, MM requires a careful discrete search over the values of λ_w, which is slow, since the quality of each such value must be estimated by cross-validation. By contrast, the VM method takes the posterior uncertainty about Z into account, resulting in almost no sensitivity to λ_w over this range. Henceforth we set λ_w = 0 for VM, meaning we are performing (approximate) maximum likelihood parameter estimation.

4.2 Sample-Sample (SS) Method

An alternative to the MM approach is to sample both Z and W from their posteriors using Hamiltonian Monte Carlo (HMC) [MHG08]. We call this the "SS" method, since we sample both Z and W. HMC leverages the fact that we can compute the gradient of the log posterior in closed form. However, it has several important parameters that must be set, including the step size, the momentum distribution, the number of leapfrog steps, etc. To handle missing data, we can simply evaluate the gradients by only summing over the observed entries of Y; we do not need to impute the missing entries on the training set. At test time, we have a collection of samples of W. For each sample of W and each test case, we sample a set of z, and compute an averaged prediction for y_m. In Figure 2 (right), we see that SS is insensitive to λ_w, just like VM, since it also models posterior uncertainty in Z (note that the absolute MSE values are higher for SS than for VM since, for continuous data, VM corresponds to EM with an exact posterior). However, in Figure 2 (top left), we see that SS can be much slower than VM. In the remainder of the paper we focus on deterministic fitting methods only.
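For concreteness, here is a sketch (ours) of the SS test-time averaging just described. It assumes posterior samples of W have already been drawn by HMC (e.g. with the settings of [MHG08]), and it uses the exact Gaussian conditional for z given the observed continuous entries:

```python
import numpy as np

def ss_predict(W_samples, y_obs, obs_idx, mis_idx, lam_z, noise_var=1.0,
               n_z=10, seed=0):
    """Average imputations of the missing entries over posterior samples of W.

    For each sampled W we draw z from its Gaussian conditional given the
    observed entries, then average the predictions W_m z.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for W in W_samples:                         # list of D x L matrices
        Wo, Wm = W[obs_idx], W[mis_idx]
        L = W.shape[1]
        V = np.linalg.inv(Wo.T @ Wo / noise_var + lam_z * np.eye(L))
        mu = V @ Wo.T @ y_obs / noise_var       # posterior mean of z | y_obs
        chol = np.linalg.cholesky(V)
        for _ in range(n_z):
            z = mu + chol @ rng.normal(size=L)  # sample z, then predict y_m
            preds.append(Wm @ z)
    return np.mean(preds, axis=0)
```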
5 Experiments on Real Data

In this section, we evaluate the performance of our model on real data with mixed continuous and discrete variables. We consider the following three cases of our model: (1) a model with latent factors but no mixtures (FA), (2) a model with mixtures but no latent factors (Mix), and (3) the general mixture of factor analyzers model (MixFA). To learn the FA model, we consider the FA-MM and FA-VM approaches. For the Mix model, we use the standard EM algorithm. In the Mix model, continuous variables can be modeled with either a diagonal or a full covariance matrix; we refer to these two variants as Mix-Diag and Mix-Full. For the MixFA model, we use the VM approach. This gives us five methods: FA-MM, FA-VM, MixFA, Mix-Full and Mix-Diag.

We consider three real datasets of different sizes (see the table in Figure 3).2 For each dataset, we use 70% for training, 10% for validation and 20% for testing. We consider 20 splits for each dataset. We use the validation set to determine the number of latent factors and the number of mixtures (ranges shown in the table), with imputation error (described below) as our performance objective. For the FA-MM method, we set the values of the regularization parameters λ_z and λ_w by cross-validation, using the range {0.01, 0.1, 1, 10, 100} for both. As VM is robust to the setting of these parameters, we set λ_z = 1 and λ_w = 0.

One way to assess the performance of a generative model is to see how well it can impute missing data. We do this by randomly introducing missing values in the test data with a missing data rate of 0.3. For continuous variables, we compute the imputation MSE averaged over all the missing values (these variables are standardized beforehand). For discrete variables, we report the cross-entropy (averaged over missing values), defined as −yᵀ log p̂, where p̂_m is the estimated probability that y = m and y uses the one-of-(M + 1) encoding. These errors are shown in Figure 3, along with the running time for the ASES dataset in the bottom right subfigure. We see that FA-VM consistently performs better than FA-MM for all the datasets. Moreover, because of the need for cross-validation, FA-MM takes more time than FA-VM. We also see that the Mix model, although faster, performs worse than FA-VM. Finally, as expected, MixFA generally performs slightly better than FA, but takes longer to run.

2 Adult and Auto are available in the UCI repository, while the ASES dataset is a subset of the Asia-Europe Survey from www.icpsr.umich.edu

[Figure 3 content (the dataset table and bar plots were garbled by the extraction). From the surviving fragments, the sample sizes appear to be N = 392 (Auto), 45222 (Adult) and 16815 (ASES), with total dimensions D = 26, 31 and 156 respectively; the cross-validation ranges are L ∈ {5, 13, 26}, {4, 15, 31}, {20, 40, 60, 80} and K ∈ {1, 5, 10, 20}, {1, 5, 10, 20}, {1, 10, 20, 30, 40}. The per-column breakdown into Dc, Dd and ΣM_d, and the bar-plot values for continuous/discrete imputation error and ASES running time, are not reliably recoverable.]

Figure 3: Left: the table shows the details of each dataset used. Here D = Dc + Σ M_d is the total size of the data vector. L and K are the ranges of the number of latent factors and mixture components used for cross-validation. Note that the maximum value of L is D, as required by the FA model. Right: the figure shows the imputation error for each dataset for continuous and discrete variables. The bottom right subfigure shows the timing comparison for the ASES dataset.
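The imputation metrics used above are simple to compute; a small sketch (ours) for scoring a batch of imputations:

```python
import numpy as np

def imputation_error(y_true_cont, y_pred_cont, y_true_disc, prob_disc):
    """MSE over missing continuous entries and average cross-entropy over
    missing discrete entries. y_true_disc holds category indices; prob_disc
    holds the predicted distributions over the M+1 states, one row per entry."""
    mse = np.mean((y_true_cont - y_pred_cont) ** 2)
    picked = prob_disc[np.arange(len(y_true_disc)), y_true_disc]
    xent = -np.mean(np.log(np.clip(picked, 1e-12, None)))
    return mse, xent
```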
6 Discussion and Future Work

In this work we have proposed a new variational EM algorithm for fitting factor analysis models with mixed data. The algorithm is based on the Bohning bound, a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the Bohning bound iteration is theoretically faster than Jaakkola's bound iteration, and we have demonstrated this advantage empirically. More importantly, the Bohning bound also easily extends to the categorical case. This enables, for the first time, an efficient variational method for fitting a factor analysis model to mixed continuous, binary, and categorical observations.

In comparison to the maximize-maximize (MM) method, which forms the basis of ePCA and other matrix factorization methods, our variational EM method accounts for posterior uncertainty in the latent factors, leading to reduced sensitivity to hyperparameters. This has important practical consequences, as the MM method requires extensive cross-validation while our approach does not. We have compared a range of models and algorithms in terms of imputation performance on real data. This analysis shows that the cost of the cross-validation search for MM is higher than the cost of fitting the FA model using our method. It also shows that standard alternatives to FA, such as finite mixture models, do not perform as well as FA. Finally, we show that the MixFA model can yield a performance improvement over a single FA model, although at a higher computational cost.

We note that the quadratic bound that we study can be used in a variety of other models, such as linear-Gaussian state-space models with categorical observations [SH03]. It might be an interesting alternative to a Laplace approximation to the posterior, which is used in [KPBSK10, RMC09]. The bound might also be useful in the context of the correlated topic model [BL06, AX07], where similar variational EM methods have been applied. In the Bayesian statistics literature, it is common to use latent factor models combined with a probit observation model; this allows one to perform inference for the latent states using efficient auxiliary-variable MCMC techniques (see e.g., [HSC09, Dun07]). Additionally, the recently proposed Riemannian Manifold Hamiltonian Monte Carlo sampler [GCC09] may significantly speed up sampling-based approaches for mixed-data factor analysis models. We leave a comparison to these approaches to future work.

Acknowledgments

We would like to thank the reviewers for their helpful comments. This work was completed in part at the Xerox Research Center Europe and was supported by the Pacific Institute for the Mathematical Sciences and the Killam Trusts at the University of British Columbia.

References

[AX07] A. Ahmed and E. Xing. On tight approximate inference of the logistic-normal topic admixture model. In AI/Statistics, 2007.
[BDdF+03] Kobus Barnard, Pinar Duygulu, Nando de Freitas, David Forsyth, David Blei, and Michael I. Jordan. Matching words and pictures. J. of Machine Learning Research, 3:1107-1135, 2003.
[BJ04] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In UAI, 2004.
[BL06] D. Blei and J. Lafferty. Correlated topic models. In NIPS, 2006.
[BNJ03] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. of Machine Learning Research, 3:993-1022, 2003.
[Boh92] D. Bohning. Multinomial logistic regression algorithm. Annals of the Inst. of Statistical Math., 44:197-200, 1992.
[Bou07] G. Bouchard.
Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS 2007 Workshop on Approximate Inference in Hybrid Models, 2007.
[CDS02] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal components analysis to the exponential family. In NIPS-14, 2002.
[Dun07] D. Dunson. Bayesian methods for latent trait modelling of longitudinal data. Stat. Methods Med. Res., 16(5):399-415, Oct 2007.
[GCC09] M. Girolami, B. Calderhead, and S. A. Chin. Riemannian manifold Hamiltonian Monte Carlo. Arxiv preprint arXiv:0907.1100, 2009.
[GH96] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical report, Dept. of Comp. Sci., Uni. Toronto, 1996.
[HSC09] P. R. Hahn, J. Scott, and C. Carvahlo. Sparse factor-analytic probit models. Technical report, Duke, 2009.
[JJ96] T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their extensions. In AI/Statistics, 1996.
[KPBSK10] S. Koyama, L. Perez-Bolde, C. Shalizi, and R. Kass. Approximate methods for state-space models. Technical report, CMU, 2010.
[LT10] J. Li and D. Tao. Simple exponential family PCA. In AI/Statistics, 2010.
[MHG08] S. Mohamed, K. Heller, and Z. Ghahramani. Bayesian exponential family PCA. In NIPS, 2008.
[RMC09] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations. J. of Royal Stat. Soc. Series B, 71:319-392, 2009.
[Row97] S. Roweis. EM algorithms for PCA and SPCA. In NIPS, 1997.
[SH03] V. Siivola and A. Honkela. A state-space method for language modeling. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 548-553, 2003.
[TB99] M. Tipping and C. Bishop. Probabilistic principal component analysis. J. of Royal Stat. Soc. Series B, 21(3):611-622, 1999.
[Tip98] M. Tipping. Probabilistic visualization of high-dimensional binary data. In NIPS, 1998.
[WCS08] Max Welling, Chaitanya Chemudugunta, and Nathan Sutter. Deterministic latent variable models and their pitfalls. In Intl. Conf. on Data Mining, 2008.
[WK01] Michel Wedel and Wagner Kamakura. Factor analysis with (mixed) observed and latent variables in the exponential family. Psychometrika, 66(4):515-530, December 2001.
[YT04] K. Yu and V. Tresp. Heterogenous data fusion via a probabilistic latent-variable model. In Organic and Pervasive Computing (ARCS 2004), 2004.
Functional form of motion priors in human motion perception Hongjing Lu 1,2 [email protected] Tungyou Lin 3 [email protected] Alan L. F. Lee 1 [email protected] Luminita Vese 3 [email protected] Alan Yuille 1,2,4 [email protected] Department of Psychology1, Statistics2 , Mathematics3 and Computer Science4 , UCLA Abstract It has been speculated that the human motion system combines noisy measurements with prior expectations in an optimal, or rational, manner. The basic goal of our work is to discover experimentally which prior distribution is used. More specifically, we seek to infer the functional form of the motion prior from the performance of human subjects on motion estimation tasks. We restricted ourselves to priors which combine three terms for motion slowness, first-order smoothness, and second-order smoothness. We focused on two functional forms for prior distributions: L2-norm and L1-norm regularization corresponding to the Gaussian and Laplace distributions respectively. In our first experimental session we estimate the weights of the three terms for each functional form to maximize the fit to human performance. We then measured human performance for motion tasks and found that we obtained better fit for the L1-norm (Laplace) than for the L2-norm (Gaussian). We note that the L1-norm is also a better fit to the statistics of motion in natural environments. In addition, we found large weights for the second-order smoothness term, indicating the importance of high-order smoothness compared to slowness and lower-order smoothness. To validate our results further, we used the best fit models using the L1-norm to predict human performance in a second session with different experimental setups. Our results showed excellent agreement between human performance and model prediction ? ranging from 3% to 8% for five human subjects over ten experimental conditions ? and give further support that the human visual system uses an L1-norm (Laplace) prior. 1 Introduction Imagine that you are traveling in a moving car and observe a walker through a fence full of punch holes. Your visual system can readily perceive the walking person against the apparently moving background using only the motion signals visible through these holes. But this task is far from trivial due to the inherent local ambiguity of motion stimuli, often referred to as the aperture problem. More precisely, if you view a line segment through an aperture then you can easily estimate the motion component normal to the line but it is impossible to estimate the tangential component. So there are an infinite number of possible interpretations of the local motion signal. One way to overcome this local ambiguity is to integrate local motion measurements across space to infer the ?true? motion field. Physiological studies have shown that direction-selective neurons 1 in primary visual cortex perform local measurements of motion. Then the visual system integrates these local motion measurements to form global motion perception [4, 5]. Psychophysicists have identified a variety of phenomena, such as motion capture and motion cooperativity, which appear to be consequences of motion spatial integration [1, 2, 3]. From the computational perspective, a number of Bayesian models have been proposed to explain these effects by hypothesizing prior assumptions about the motion fields that occur in natural environments. 
In particular, it has been shown that a prior which is biased to slow-and-smooth motion can account for a range of experimental results [6, 7, 8, 9, 10]. But although evidence from physiology and psychophysics supports the existence of an integration stage, it remains unclear exactly what motion priors are used to resolve the measurement ambiguities. In the walking example described above (see figure 1), the visual system needs to integrate the local measurements in the two regions within the red boxes in order to perceive a coherently moving background. This integration must be performed over large distances, because the regions are widely separated, but this integration cannot be extended to include the walker region highlighted in the blue box, because this would interfere with accurate estimation of the walker?s movements. Hence the motion priors used by the human visual system must have a functional form which enables flexible and robust integration. We aim to determine the functional form of the motion priors which underly human perception, and to validate how well these priors can influence human perception in various motion tasks. Our approach is to combine parametric modeling of the motion priors with psychophysical experiments to estimate the model parameters that provide the best fit to human performance across a range of stimulus conditions. To provide further validation, we then use the estimated model to predict human performance in several different experimental setups. In this paper, we first introduce the two functional forms which we consider and review related literature in Section 2. Then in Section 3 we present our computational theory and implementation details. In Section 4 we test the theory by comparing its predictions with human performance in a range of psychophysical experiments. Figure 1: Observing a walker with a moving camera. Left panel, two example frames. The visual system needs to integrate motion measurements from the two regions in the red boxes in order to perceive the motion of the background. But this integration should not be extended to the walker region highlighted in the blue box. Right panel, the integration task is made harder by observing the scene through a set of punch holes. The experimental stimuli in our psychophysical experiments are designed to mimic these observation conditions. 2 Functional form of motion priors Many models have proposed that the human visual system uses prior knowledge of probable motions, but the functional form for this prior remains unclear. For example, several well-established computational models employ Gaussian priors to encode the bias towards slow and spatially smooth motion fields. But the choice of Gaussian distributions has largely been based on computational convenience [6, 8], because they enable us to derive analytic solutions. However, some evidence suggests that different distribution forms may be used by the human visual system. Researchers have used motion sequences in real scenes to measure the spatial and temporal statistics of motion fields [11, 12]. These natural statistics show that the magnitude of the motion (speed) falls off in a manner similar to a Laplacian distribution ( L1-norm regularization), which has heavier tails than Gaussian distributions (see the left plot in figure 2). These heavy tails indicates that while slow motions are very common, fast motions are still occur fairly frequently in natural 2 environments. 
A similar distribution pattern was also found for spatial derivatives of the motion flow, showing that non-smooth motion fields can also happen in natural environments. This statistical finding is not surprising, since motion discontinuities can arise in the natural environment due to the relative motion of objects, foreground/background segmentation, and occlusion.

Stocker and Simoncelli [10] conducted a pioneering study to infer the functional form of the slowness motion prior. More specifically, they used human subject responses in a speed discrimination task to infer the shape of the slowness prior distribution. Their inferred slowness prior showed significantly heavier tails than a Gaussian distribution. They showed that a motion model using this inferred prior provided an adequate fit to human data for a wide range of stimuli.

Finally, the robustness of the L1-norm has also been demonstrated in many statistical applications (e.g., regression and feature selection). In the simplest case of linear regression, suppose we want to find the intercept with the constraint of zero slope. Regression with L1-norm regularization estimates the intercept based on the sample median, whereas L2-norm regression estimates the intercept based on the sample mean. A single outlier has very little effect on the median but can alter the mean significantly. Accordingly, L1-norm regularization is less sensitive to outliers than is the L2-norm. We illustrate this for motion estimation by the example in the right panel of figure 2. If there is a motion boundary in the true motion field, then a model using L2-norm regularization (Gaussian priors) tends to impose strong smoothing over the two distinct motion fields, which blurs the motion across the discontinuity. But the model with an L1-norm (Laplace prior) preserves the motion discontinuity and gives smooth motion flow on both sides of it.

Figure 2: Left plot, the Gaussian distribution (L2-norm regularization) and the Laplace distribution (L1-norm regularization). Right plot, an illustration of over-smoothing caused by using Gaussian priors.

3 Mathematical Model

The input data is specified by local motion measurements of the form u_q = (u_{1q}, u_{2q}) at a discrete set of positions r_q, q = 1, ..., N in the image plane. The goal is to find a smooth motion field v defined at all positions r in the image domain, estimated from the local motion measurements. The motion field v can be thought of as an interpolation of the data which obeys a slowness and smoothness prior and which agrees approximately with the local motion measurements. Recall that the visual system can only observe the local motion in the directions n_q = u_q / |u_q| (sometimes called component motion) because of the aperture problem. Hence approximate agreement with local measurements reduces to the constraints v(r_q) · n_q − u_q · n_q ≈ 0.

As illustrated in figure 3, we consider three motion prior terms, which quantify the preference for slowness, first-order smoothness and second-order smoothness respectively. Let Ω denote the image domain, i.e. the set of points r = (r_1, r_2) ∈ Ω. We define the prior to be a Gibbs distribution with an energy function of the form

E(v) = ∫_Ω ( (λ/α) |v|^α + (μ/β) |∇v|^β + (ν/γ) |Δv|^γ ) dr,

where λ, μ, ν, α, β, γ are positive parameters (the weight symbols λ, μ, ν are used here consistently for the slowness, first-order and second-order terms) and

|v| = sqrt( v_1² + v_2² ),
|∇v| = sqrt( (∂v_1/∂r_1)² + (∂v_1/∂r_2)² + (∂v_2/∂r_1)² + (∂v_2/∂r_2)² ),
|Δv| = sqrt( (∂²v_1/∂r_1²)² + (∂²v_1/∂r_2²)² + (∂²v_2/∂r_1²)² + (∂²v_2/∂r_2²)² ).
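To make the prior concrete, here is a small NumPy sketch (ours; the weight and exponent names are our assumptions, since the paper's symbols were lost in extraction) that evaluates this energy for a discrete flow field using finite differences:

```python
import numpy as np

def motion_prior_energy(v, lam, mu, nu, alpha, beta, gamma, h=1.0, eps=1e-6):
    """Discrete E(v) for a flow field v of shape (2, H, W).

    alpha = beta = gamma = 1 gives the Laplace (L1) prior;
    alpha = beta = gamma = 2 gives the Gaussian (L2) prior.
    eps inside the square roots avoids division-by-zero issues, as in the paper.
    """
    # slowness term: |v|
    speed = np.sqrt(v[0] ** 2 + v[1] ** 2 + eps)
    # first-order smoothness: |grad v| pooled over both flow components
    d1 = [np.gradient(v[k], h) for k in range(2)]   # [(dv_k/dr1, dv_k/dr2)]
    grad_mag = np.sqrt(sum(g ** 2 for gs in d1 for g in gs) + eps)
    # second-order smoothness: pure second derivatives along each axis
    d2 = [np.gradient(g, h, axis=ax)
          for k in range(2) for ax, g in enumerate(d1[k])]
    lap_mag = np.sqrt(sum(s ** 2 for s in d2) + eps)
    return float(np.sum((lam / alpha) * speed ** alpha
                        + (mu / beta) * grad_mag ** beta
                        + (nu / gamma) * lap_mag ** gamma) * h ** 2)
```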
Figure 3: An illustration of the three prior terms: (i) slowness, (ii) first-order smoothness, and (iii) second-order smoothness.

The (negative log) likelihood function for grating stimuli imposes the measurement constraints and is of the form

E(u|v) = Σ_{q=1}^{N} | v(r_q) · n_q − u_q · n_q |^p = Σ_{q=1}^{N} | v(r_q) · n_q − |u_q| |^p.

The combined energy function to be minimized is

inf_v { F(v) = (c/p) E(u|v) + E(v) }.

This energy is a convex function provided the exponents satisfy α, β, γ, p ≥ 1. Therefore the energy minimum can be found by imposing the first-order optimality conditions ∂F(v)/∂v = 0 (the Euler-Lagrange equations). Below we compute these Euler-Lagrange partial differential equations in v = (v_1, v_2). We fix the likelihood term by setting p = 2 (the exponent of the likelihood term). If α, β, γ ≠ 2, the Euler-Lagrange equations are non-linear partial differential equations (PDEs) and explicit solutions cannot be found (if α, β, γ = 2 the Euler-Lagrange equations will be linear and so can be solved by Fourier transforms or Green's functions, as previously done in [6]). To solve these non-linear PDEs we discretize them by finite differences and use iterative gradient descent (i.e. we apply the dynamics ∂v(r,t)/∂t = −∂F(v(r,t))/∂v(r,t) until we reach a fixed state). More precisely, we initialize v(r, 0) at random, and solve the update equation for t > 0:

∂v_k(r,t)/∂t = −λ |v|^{α−2} v_k + μ div( |∇v|^{β−2} ∇v_k ) − ν Δ( |Δv|^{γ−2} Δv_k ) − c ( v(r_q) · n_q − u_q · n_q )^{p−1} n_{kq} δ_{r,r_q},

where k = 1, 2, and δ_{r,r_q} = 1 if r = r_q and δ_{r,r_q} = 0 if r ≠ r_q. Since the powers α − 2, β − 2, γ − 2 become negative when the positive exponents α, β, γ take the value 1, we include a small ε = 10^{-6} inside the square roots to avoid division by zero (when calculating terms like |·|). The algorithm stops when the difference between two consecutive energy estimates is close to zero (i.e. the stopping criterion is based on thresholding the energy change).
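A compact sketch of one explicit descent step (ours, not the authors' code; it uses simple numerical derivatives and a crude per-component approximation of the |Δv| weighting, rather than the hand-derived semi-implicit scheme given next):

```python
import numpy as np

def descent_step(v, uq_norm, nq, mask, lam, mu, nu, c,
                 alpha=1.0, beta=1.0, gamma=1.0, dt=0.1, h=1.0, eps=1e-6):
    """One explicit gradient-descent step on F(v), with p = 2.

    v: flow field (2, H, W); uq_norm: measured component speeds |u_q| (H, W);
    nq: unit grating normals (2, H, W); mask: 1 where a measurement exists.
    """
    speed = np.sqrt(v[0] ** 2 + v[1] ** 2 + eps)
    resid = (v[0] * nq[0] + v[1] * nq[1] - uq_norm) * mask   # v.n - |u|
    v_new = np.empty_like(v)
    for k in range(2):
        g1, g2 = np.gradient(v[k], h)                        # grad v_k
        gmag = np.sqrt(g1 ** 2 + g2 ** 2 + eps)
        div = (np.gradient(gmag ** (beta - 2) * g1, h, axis=0)
               + np.gradient(gmag ** (beta - 2) * g2, h, axis=1))
        lap = (np.gradient(np.gradient(v[k], h, axis=0), h, axis=0)
               + np.gradient(np.gradient(v[k], h, axis=1), h, axis=1))
        w = (np.abs(lap) + eps) ** (gamma - 2)               # simplified weight
        bih = (np.gradient(np.gradient(w * lap, h, axis=0), h, axis=0)
               + np.gradient(np.gradient(w * lap, h, axis=1), h, axis=1))
        dv = (-lam * speed ** (alpha - 2) * v[k]
              + mu * div - nu * bih - c * resid * nq[k])
        v_new[k] = v[k] + dt * dv
    return v_new
```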
Our implementation discretizes the Euler-Lagrange equations, as specified below. Let B̃^{(l)} = |∇v|^{β−2}, C̃^{(l)} = |Δv|^{γ−2} and Ã^{(l)} = |v|^{α−2}, where l denotes the time discretization with Δt the time-step, and (i, j) denotes the space discretization with h = Δr_1 = Δr_2 the space-step. Letting

Fid_{k,i,j} = −c ( v(r_q) · n_q − u_q · n_q )^{p−1} n_{kq} if r_q = (i, j), and Fid_{k,i,j} = 0 otherwise,

the above PDEs can be discretized as (all B̃, C̃, Ã factors carry the superscript (l))

( v_{k,i,j}^{(l+1)} − v_{k,i,j}^{(l)} ) / Δt = Fid_{k,i,j}^{(l)} − λ Ã_{i,j} v_{k,i,j}^{(l+1)}
  + (μ/h²) [ −(2B̃_{i,j} + B̃_{i−1,j} + B̃_{i,j−1}) v_{k,i,j}^{(l+1)} + B̃_{i,j} v_{k,i+1,j}^{(l)} + B̃_{i−1,j} v_{k,i−1,j}^{(l)} + B̃_{i,j} v_{k,i,j+1}^{(l)} + B̃_{i,j−1} v_{k,i,j−1}^{(l)} ]
  − (ν/h⁴) { (16C̃_{i,j} + C̃_{i+1,j} + C̃_{i−1,j} + C̃_{i,j+1} + C̃_{i,j−1}) v_{k,i,j}^{(l+1)}
    − 4 [ (C̃_{i+1,j} + C̃_{i,j}) v_{k,i+1,j}^{(l)} + (C̃_{i−1,j} + C̃_{i,j}) v_{k,i−1,j}^{(l)} + (C̃_{i,j+1} + C̃_{i,j}) v_{k,i,j+1}^{(l)} + (C̃_{i,j−1} + C̃_{i,j}) v_{k,i,j−1}^{(l)} ]
    + (C̃_{i+1,j} + C̃_{i,j+1}) v_{k,i+1,j+1}^{(l)} + (C̃_{i+1,j} + C̃_{i,j−1}) v_{k,i+1,j−1}^{(l)} + (C̃_{i−1,j} + C̃_{i,j+1}) v_{k,i−1,j+1}^{(l)} + (C̃_{i−1,j} + C̃_{i,j−1}) v_{k,i−1,j−1}^{(l)}
    + C̃_{i+1,j} v_{k,i+2,j}^{(l)} + C̃_{i−1,j} v_{k,i−2,j}^{(l)} + C̃_{i,j+1} v_{k,i,j+2}^{(l)} + C̃_{i,j−1} v_{k,i,j−2}^{(l)} }.

Defining the neighbor sums

E1_{i,j} = B̃_{i,j−1} + B̃_{i−1,j} + 2B̃_{i,j},
E2_{i,j} = C̃_{i+1,j} + C̃_{i−1,j} + 16C̃_{i,j} + C̃_{i,j+1} + C̃_{i,j−1},
E3 = C̃_{i+1,j} + C̃_{i,j},  E4 = C̃_{i−1,j} + C̃_{i,j},  E5 = C̃_{i,j+1} + C̃_{i,j},  E6 = C̃_{i,j−1} + C̃_{i,j},
E7 = C̃_{i+1,j} + C̃_{i,j+1},  E8 = C̃_{i+1,j} + C̃_{i,j−1},  E9 = C̃_{i−1,j} + C̃_{i,j+1},  E10 = C̃_{i−1,j} + C̃_{i,j−1},
E11 = 1 / ( 1 + Δt ( λÃ + (μ/h²) E1 + (ν/h⁴) E2 ) ),

we can solve for v^{(l+1)} and obtain

v_{k,i,j}^{(l+1)} = E11_{i,j} ( v_{k,i,j}^{(l)} + Δt { Fid_{k,i,j}^{(l)}
  + (μ/h²) ( B̃_{i,j} v_{k,i+1,j}^{(l)} + B̃_{i−1,j} v_{k,i−1,j}^{(l)} + B̃_{i,j} v_{k,i,j+1}^{(l)} + B̃_{i,j−1} v_{k,i,j−1}^{(l)} )
  − (ν/h⁴) [ −4 ( E3 v_{k,i+1,j}^{(l)} + E4 v_{k,i−1,j}^{(l)} + E5 v_{k,i,j+1}^{(l)} + E6 v_{k,i,j−1}^{(l)} )
    + E7 v_{k,i+1,j+1}^{(l)} + E8 v_{k,i+1,j−1}^{(l)} + E9 v_{k,i−1,j+1}^{(l)} + E10 v_{k,i−1,j−1}^{(l)}
    + C̃_{i+1,j} v_{k,i+2,j}^{(l)} + C̃_{i−1,j} v_{k,i−2,j}^{(l)} + C̃_{i,j+1} v_{k,i,j+2}^{(l)} + C̃_{i,j−1} v_{k,i,j−2}^{(l)} ] } ).

4 Experiments

We compared two possible functional forms for the motion prior: (1) the Laplace distribution with L1-norm regularization, with α = β = γ = 1, and (2) the Gaussian distribution with L2-norm regularization, with α = β = γ = 2. Since the main goal of this work is to discover motion priors, we employed the same likelihood term with p = 2 for both models. We used the performance of human subjects in the first experimental session to estimate the weights of the three prior terms, λ, μ, ν, for each functional form. We then validated the predictions of the model by comparing them with human performance in a second experimental session which used different stimulus parameters.

4.1 Stimulus

We used a multiple-aperture stimulus [13] which consists of 12 by 12 drifting sine-wave gratings within a square window subtending 8°. Each element (0.5°) was composed of an oriented sinusoidal grating of 5.6 cycles/deg spatial frequency, within a stationary Gaussian window. The contrast of the elements was 0.2. The motion stimulus included 20 time frames which were presented within 267 ms. The global motion stimulus was generated as follows. First, the orientation of each local grating element was randomly determined. Second, a global motion (also called 2D motion, with a speed of 1 deg/sec) direction was chosen. Third, a certain proportion of elements (signal elements) were assigned the predetermined 2D motion, while each of the remaining elements (noise elements) was assigned a random 2D motion. Finally, given its orientation and 2D motion velocity, the drifting speed for each element was computed so that the local (or component) drifting velocity was consistent with the assigned 2D motion velocity. As shown in figure 4, the global motion strength was controlled by varying the proportion of signal elements in the stimulus (i.e., the coherence ratio). Stimuli with a high ratio exhibited more coherent motion, and stimuli with a low ratio exhibited more random motion.

In all the experiments reported in this paper, each participant completed two experiment sessions with different stimulus parameters. The goal of session 1 was parameter estimation: to estimate the weights of the three prior terms (slowness, first-order smoothness and second-order smoothness) for each model. Session 2 was for model validation: using the weights estimated from session 1 to predict subject performance for different experimental conditions.

Figure 4: Stimulus illustration.
Multiple-aperture stimuli with coherence ratios of 0, 0.4, 0.8 and 1 from left to right. The blue and green arrows indicate the 2D motion directions assigned to signal and noise elements, respectively.

4.2 Experiment 1

4.2.1 Procedure

There were two separate sessions in Experiment 1. On each trial of the first session, observers were presented with two motion patterns, one after another. The first one was the reference motion pattern, which always moved upward (0 degrees), and the second one was the test motion pattern, whose global motion direction was either tilted towards the left or the right relative to the reference pattern. Both patterns lasted for 267 ms with a 500 ms inter-stimulus interval. The observer's task was to determine whether the global motion direction of the test pattern was more towards the left or right relative to the reference pattern. In order to make sure observers understood the task and were able to perceive the global motion, before the beginning of the first session observers passed a test session in which they achieved 90% accuracy in 40 consecutive trials with 80% coherence and 20 (or 45) degrees of angular difference. To allow observers to familiarize themselves with the task, before each experimental session observers went through a practice session with 10 blocks of 25 trials.

The first session consisted of 20 blocks of 50 trials. The coherence ratio was constant within each block. The observer's discrimination performance was measured for ten coherence ratios (0, 0.1, 0.2, ..., 0.9) in the first session. The angular difference between the reference and test motion was fixed for each observer in the entire session (2 degrees for observers AL, MW and AE; 45 degrees for OQ and CC). The second session was identical to the first one, except that the coherence ratio was fixed at 0.7, and the angular difference between the global motion directions of the reference and the test patterns was varied across blocks (ten angular differences: 1, 5, 10, ..., 45 degrees).

4.2.2 Results

We implemented motion models with the Laplace prior distribution (termed the "L1 model") and the Gaussian prior (termed the "L2 model"). As the first step, an exhaustive search was conducted to find the set of weights for the prior terms that provided the best fit to the human psychometric performance in experimental session 1 (a sketch of this search is given below, after Table 1). Table 1 reports the estimated parameters for each individual subject using the L1 and L2 models. There were clear individual differences in the estimated weight values. However, across all five subjects, large weight values were found for the second-order smoothness terms, indicating that the contribution from a higher-order smoothness preference is important in perceiving global motion from the multiple-aperture stimulus.

Table 1: Estimated weights λ, μ, ν of the slowness, first-order smoothness and second-order smoothness prior terms, for the L1- and L2-norm models

Subjects | L1 λ  | L1 μ | L1 ν  | L2 λ  | L2 μ | L2 ν
AE       | 0.001 | 1    | 15000 | 0.01  | 100  | 16000
AL       | 0.01  | 100  | 16000 | 0.01  | 1    | 16000
CC       | 0.001 | 0.1  | 16000 | 0.001 | 0.1  | 16000
MW       | 0.001 | 10   | 17000 | 0.01  | 1    | 20000
OQ       | 0.01  | 100  | 18000 | 0.01  | 100  | 18000

Figure 5 shows the results from each individual participant and the best-fitting model performance. The results clearly show that the L1 model provided the better fit to human data when compared to the L2 model. In general, humans appear to be sensitive to the inclusion of noise elements, and perform worse than the L2 model, which tends to strongly encourage smoothness over the entire display window.
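A hedged sketch of that exhaustive search (ours; model_accuracy stands in for running the full energy-minimization model on the session-1 stimuli and is assumed, not provided by the paper — the candidate grids below simply mirror the magnitudes in Table 1):

```python
import itertools
import numpy as np

def fit_weights(human_acc, coherences, model_accuracy,
                lams=(0.001, 0.01, 0.1), mus=(0.1, 1, 10, 100),
                nus=(15000, 16000, 17000, 18000)):
    """Exhaustive search for (lam, mu, nu) minimizing the mean absolute
    difference between model and human accuracy across coherence ratios.

    model_accuracy(lam, mu, nu, coh) -> predicted proportion correct.
    """
    best, best_err = None, np.inf
    for lam, mu, nu in itertools.product(lams, mus, nus):
        pred = np.array([model_accuracy(lam, mu, nu, c) for c in coherences])
        err = np.mean(np.abs(pred - human_acc))   # model error as in Fig. 5
        if err < best_err:
            best, best_err = (lam, mu, nu), err
    return best, best_err
```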
In experimental session 2, the two models predicted performance as a function of the angular difference between the reference motion and the test motion. As shown in figure 6, the L1 model yielded less error in fitting human performance than did the L2 model. This result illustrates the power of the L1 model in predicting human performance in motion tasks different from the tasks used for estimating the model parameters.

[Figure 5 plots (numeric content not recoverable from the extraction): accuracy vs coherence ratio for subjects AE and CC, with human, L1-model and L2-model curves, and a model-error bar plot over subjects AL, AE, CC, MW and OQ.]

Figure 5: Comparison between human performance and model predictions in session 1. Left two plots, accuracy as a function of coherence ratio for two representative subjects. Blue solid lines indicate human performance. Red and green dashed lines indicate L1 and L2 model predictions with the best-fitted parameters. Right plot, model error for all five subjects. The model error was computed as the mean absolute difference between human performance and model predictions. The L1 model consistently fits human performance better than the L2 model for all subjects.

[Figure 6 plots (numeric content not recoverable): accuracy vs angular difference (1-45 degrees) for subjects AE and CC, and a model-error bar plot over all five subjects.]

Figure 6: Comparison between human performance and model predictions in session 2. Left two plots, accuracy as a function of the angular difference between the reference and the test motion for two representative subjects. Blue solid lines indicate human performance. Red and green dashed lines indicate L1 and L2 model predictions. Right plot, model error for all five subjects. The smaller errors of the L1 model indicate that it consistently fits human performance better than the L2 model for all subjects.

4.3 Experiment 2

The results of Experiment 1 clearly support the conclusion that the motion model with the Laplace prior (L1-norm regularization) fits human performance better than does the model with the Gaussian prior (L2 model). In Experiment 2, we compared human motion judgments with predictions of the L1 model on each trial, rather than using the average performance as in Experiment 1. Such a detailed comparison can provide quantitative measures of how well the L1 model is able to predict human motion judgments for specific stimuli.

In Experiment 2, the first session was identical to that in Experiment 1, in which the angular difference between the two global motion directions was fixed (45 degrees for all observers) while the coherence ratio was varied. In the second session, observers were presented with one motion stimulus on each trial. The global motion direction of the pattern was randomly selected from 24 possible directions (with a 15-degree difference between two adjacent directions). Observers reported their perceived global motion direction by rotating a line after the motion stimulus disappeared from the screen. The experiment included 12 blocks (each with 48 trials) and six coherence ratios (0, 0.1, 0.3, ..., 0.9). A two-pass design was used to let each observer run the identical session twice in order to measure the reliability of the observer's judgments.
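The two-pass analysis reported next boils down to correlating direction reports across repeated trials. A small sketch (ours; the paper does not state whether a circular or linear correlation was used, so plain Pearson correlation on angle-unwrapped reports is an assumption):

```python
import numpy as np

def trialwise_correlation(dirs_run1, dirs_run2):
    """Pearson correlation between direction reports (degrees) on identical
    trials, after unwrapping each pair to the nearest equivalent angle."""
    a = np.asarray(dirs_run1, dtype=float)
    b = np.asarray(dirs_run2, dtype=float)
    b = a + (b - a + 180.0) % 360.0 - 180.0   # map b to within 180 deg of a
    return np.corrcoef(a, b)[0, 1]

# human self-consistency vs model agreement, computed per coherence ratio:
# r_self  = trialwise_correlation(run1_reports, run2_reports)
# r_model = trialwise_correlation(run1_reports, model_predicted_dirs)
```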
We used human performance in session 1 to estimate the model parameters: the weights λ, μ, ν of the slowness, first-order smoothness and second-order smoothness prior terms for each individual participant. Since identical stimuli were used in the two runs of session 2, we can quantify the reliability of the observer's judgments by computing the response correlation across trials in these two runs. As shown in the left plot of figure 7, human observers' responses were significantly correlated in the two runs, even in the condition of random motion (coherence ratio close to 0). The correlated responses in these subthreshold conditions suggest that human observers are able to provide a consistent interpretation of motion flow, even when the motion is random. The right plot of figure 7 shows the trial-by-trial correlation between human motion judgments and the model-predicted global motion direction. The model-human correlations were comparable to the human self-correlations. Even in the random motion condition (where the coherence ratio is 0), the correlation between the model and human judgments is greater than 0.5, indicating the predictive power of the model. We also noticed that the correlation between humans and the L2 model was around 8 percent worse than the human self-correlation and the correlation between the L1 model and humans. This finding further demonstrates that the L1 model provided a better fit to human data than did the L2 model.

[Figure 7 plots (numeric content not recoverable from the extraction): human self-correlation and human-model correlation as a function of coherence ratio, for subjects AP, MS, SG and XD.]

Figure 7: Comparison between human performance and model predictions using trial-by-trial correlation. Left plot, human self-correlation between two runs of identical experimental sessions. Right plot, correlation between human motion judgments and the model-predicted global motion direction. The significant correlation between humans and the model indicates that the L1 model is able to predict human motion judgments for specific stimuli, even in the random display, i.e., with coherence ratio close to 0.

5 Conclusions

We found that a motion prior in the form of the Laplace distribution with L1-norm regularization provided significantly better agreement with human performance than did Gaussian priors with an L2-norm. We also showed that humans weighted second-order motion smoothness much higher than first-order smoothness and slowness. Furthermore, model predictions using this Laplace prior were consistent with human perception of coherent motion, even for random displays. Overall, our results suggest that human motion perception for these types of stimuli can be well modeled using Laplace priors.

Acknowledgments

This research was supported by NSF grants IIS-613563 to AY and BCS-0843880 to HL.

References

[1] R. Sekuler, S. N. J. Watamaniuk and R. Blake. Perception of visual motion. In Steven's Handbook of Experimental Psychology, third edition. H. Pashler, series editor; S. Yantis, volume editor. J. Wiley Publishers, New York, 2002.
[2] L. Welch. The perception of moving plaids reveals two processing stages. Nature, 337, 734-736, 1989.
[3] P. Schrater, D. Knill and E. Simoncelli. Mechanisms of visual motion detection. Nature Neuroscience, 3, 64-68, 2000.
[4] J. A. Movshon and W. T. Newsome. Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys. Visual Neuroscience, 16, 7733-7741, 1996.
[5] N. C. Rust, V.
Mante, E. P. Simoncelli and J. A. Movshon. How MT cells analyze the motion of visual patterns. Nature Neuroscience, 9(11), 1421-1431. 2006. [6] A.L. Yuille and N.M. Grzywacz. A computational theory for the perception of coherent visual motion. Nature, 333,71-74. 1988. [7] A.L. Yuille and N.M. Grzywacz. A Mathematical Analysis of the Motion Coherence Theory. International Journal of Computer Vision. 3. pp 155-175. 1989. [8] Y. Weiss, E.P. Simoncelli, and E.H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604. 2002. [9] H. Lu and A.L. Yuille. Ideal Observers for Detecting Motion: Correspondence Noise. Advances in Neural Information Processing Systems 7, pp. 827-834. 2005. [10] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4), pp. 578-585, 2006. [11] S. Roth and M. J. Black. On the spatial statistics of optical flow. International Journal of Computer Vision, 74(1), pp. 33-50, 2007. [12] C. Liu, W. T. Freeman, E. H. Adelson and Y. Weiss. IEEE Conference on Computer Vision and Pattern Recognition, 2008. [13] Amano, K., Edwards, M., Badcock, D. R. and Nishida, S. Adaptive pooling of visual motion signals by the human visual system revealed with a novel multi-element stimulus. Journal of Vision, 9(3), 4, 1-25, 2009. 9
Inference with Multivariate Heavy-Tails in Linear Models

Danny Bickson and Carlos Guestrin
Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213
{bickson,guestrin}@cs.cmu.edu

Abstract

Heavy-tailed distributions naturally occur in many real-life problems. Unfortunately, it is typically not possible to compute inference in closed form in graphical models which involve such heavy-tailed distributions. In this work, we propose a novel simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family of distributions which generalizes the Cauchy, Lévy and Gaussian distributions, we show for the first time how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact, LCMs are always defined for any random variables (discrete, continuous, or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Another potential application is iterative decoding of linear channels with non-Gaussian noise.

1 Introduction

Heavy-tailed distributions naturally occur in many real-life phenomena, for example in computer networks [23, 14, 16]. Typically, a small set of machines is responsible for a large fraction of the consumed network bandwidth; equivalently, a small set of users generates a large fraction of the network traffic. Another common property of communication networks is that network traffic tends to be linear [8, 23]: the total incoming traffic at a node is the sum of distinct incoming flows. Recently, several works have proposed linear multivariate statistical methods for monitoring network health, performance analysis, or intrusion detection [15, 13, 16, 14]. Several aspects of network traffic make it challenging to model with a probabilistic graphical model. In many cases, the underlying heavy-tailed distributions are difficult to work with analytically. That is why existing solutions in the area of network monitoring involve various approximations of the joint probability distribution function using a variety of techniques: mixtures of distributions [8], spectral decomposition [13], histograms [14], sketches [16], entropy [14], sampled moments [23], etc. In the current work, we propose a novel linear probabilistic graphical model called the linear characteristic model (LCM) to model linear interactions of independent heavy-tailed random variables (Section 3). Using the stable family of distributions (defined in Section 2), a family of heavy-tailed distributions, we show how to compute both exact and approximate inference (Section 4). Using real data from the domain of computer networks, we demonstrate the applicability of our proposed methods for computing inference in LCM (Section 5). We summarize our contributions below:

- We propose a new linear graphical model called LCM, defined as a product of factors in the cf domain. We show that our model is well defined for any collection of random variables, since any random variable has a matching cf.
- Computing inference in closed form in linear models involving continuous variables is typically limited to the well-understood cases of Gaussians and simple regression problems in exponential families.
In this work, we extend the applicability of belief propagation to the stable family of distributions, a generalization of the Gaussian, Cauchy and Lévy distributions. We analyze both exact and approximate inference algorithms, including convergence and accuracy of the solution.
- We demonstrate the applicability of our proposed method by performing inference in a real setting, using network tomography data obtained from the PlanetLab network.

1.1 Related work

There are three main relevant works in the machine learning domain related to the current work: Convolutional Factor Graphs (CFG), Copulas, and Independent Component Analysis (ICA). Below we briefly review them and motivate why a new graphical model is needed. Convolutional Factor Graphs (CFG) [18, 19] are a graphical model for representing linear relations of independent latent random variables. CFGs assume that the probability distribution factorizes as a convolution of potentials, and use duality to derive a product factorization in the characteristic function (cf) domain. In this work we extend CFGs by defining the graphical model as a product of factors in the cf domain. Unlike CFGs, LCMs are always defined, for any probability distribution, while a CFG may not be defined when the inverse Fourier transform does not exist. A closely related technique is the Copula method [22, 17]. Similar to our work, Copulas assume a linear underlying model. The main difference is that Copulas transform each marginal variable into a uniform distribution and perform inference in the cumulative distribution function (cdf) domain. In contrast, we perform inference in the cf domain. In our case of interest, when the underlying distributions are stable, Copulas cannot be used, since stable distributions are not analytically expressible in the cdf domain. A third related technique is ICA (independent component analysis) in linear models [27]. Assuming a linear model Y = AX,¹ where the observations Y are given, the task is to estimate the linear relation matrix A, using only the fact that the latent variables X are statistically mutually independent. The two techniques (LCM and ICA) are complementary, since ICA can be used to learn the linear model, while LCM is used for computing inference in the learned model.

2 Stable distributions

The stable distributions [30] are a family of heavy-tailed distributions; the Cauchy, Lévy and Gaussian distributions are special instances of this family (see Figure 1). Stable distributions are used in different problem domains, including economics, physics, geology and astronomy [24]. They are useful since they can model heavy-tailed distributions that naturally occur in practice. As we will soon show with our networking example, network flows exhibit an empirical distribution which can be modeled remarkably well by stable distributions. We denote a stable distribution by a tuple of four parameters: S(α, β, γ, δ). We call α the characteristic exponent, β the skew parameter, γ a scale parameter, and δ a shift parameter. For example (Fig. 1), a Gaussian N(μ, σ²) is a stable distribution with parameters S(2, 0, σ/√2, μ), a Cauchy distribution Cauchy(γ, δ) is stable with S(1, 0, γ, δ), and a Lévy distribution Lévy(γ, δ) is stable with S(1/2, 1, γ, δ). In the following we define a stable distribution formally, beginning with a unit-scale, zero-centered stable random variable.

Definition 2.1. [25, Def. 1.6] A random variable X is stable if and only if X ≐ aZ + b, where 0 < α ≤ 2, −1 ≤ β ≤ 1, a, b ∈ R, a ≠ 0, and Z is a random variable with characteristic function²

E[\exp(iuZ)] = \begin{cases} \exp\!\left(-|u|^{\alpha}\left[1 - i\beta \tan(\tfrac{\pi\alpha}{2})\,\mathrm{sign}(u)\right]\right) & \alpha \neq 1 \\ \exp\!\left(-|u|\left[1 + i\beta \tfrac{2}{\pi}\,\mathrm{sign}(u)\log|u|\right]\right) & \alpha = 1. \end{cases} \quad (1)

¹ The linear model is formally defined in Section 3.
² We formally define the characteristic function in the supplementary material.
Next we define a general stable random variable.

Definition 2.2. [25, Def. 1.7] A random variable X is S(α, β, γ, δ) if

X \triangleq \begin{cases} \gamma\left(Z - \beta \tan(\tfrac{\pi\alpha}{2})\right) + \delta & \alpha \neq 1 \\ \gamma Z + \delta & \alpha = 1, \end{cases}

where Z is given by (1). X has characteristic function

E[\exp(iuX)] = \begin{cases} \exp\!\left(-\gamma^{\alpha}|u|^{\alpha}\left[1 - i\beta \tan(\tfrac{\pi\alpha}{2})\,\mathrm{sign}(u)\,(|\gamma u|^{1-\alpha} - 1)\right] + i\delta u\right) & \alpha \neq 1 \\ \exp\!\left(-\gamma|u|\left[1 + i\beta \tfrac{2}{\pi}\,\mathrm{sign}(u)\log(\gamma|u|)\right] + i\delta u\right) & \alpha = 1. \end{cases}

A basic property of stable laws is that a weighted sum of α-stable random variables is α-stable (hence the family is called stable). This property will be useful in the next section, where we compute inference in a linear graphical model with underlying stable distributions. The following proposition formalizes this linearity.

Proposition 2.1. [25, Prop. 1.16]
a) Multiplication by a scalar. If X ∼ S(α, β, γ, δ) then for any a, b ∈ R, a ≠ 0,
aX + b ∼ S(α, sign(a)β, |a|γ, aδ + b).
b) Summation of two stable variables. If X₁ ∼ S(α, β₁, γ₁, δ₁) and X₂ ∼ S(α, β₂, γ₂, δ₂) are independent, then X₁ + X₂ ∼ S(α, β, γ, δ) where

\beta = \frac{\beta_1\gamma_1^{\alpha} + \beta_2\gamma_2^{\alpha}}{\gamma_1^{\alpha} + \gamma_2^{\alpha}}, \qquad \gamma^{\alpha} = \gamma_1^{\alpha} + \gamma_2^{\alpha}, \qquad \delta = \delta_1 + \delta_2 + \eta,

\eta = \begin{cases} \tan(\tfrac{\pi\alpha}{2})\left[\beta\gamma - \beta_1\gamma_1 - \beta_2\gamma_2\right] & \alpha \neq 1 \\ \tfrac{2}{\pi}\left[\beta\gamma\log\gamma - \beta_1\gamma_1\log\gamma_1 - \beta_2\gamma_2\log\gamma_2\right] & \alpha = 1. \end{cases}

Note that both X₁ and X₂ must have the same characteristic exponent α.

3 Linear characteristic models

A drawback of general stable distributions is that they have no closed-form equation for the pdf or the cdf, which makes them harder to handle. This is probably one of the reasons stable distributions are rarely used in the probabilistic graphical models community. We propose a novel approach for modeling linear interactions between random variables distributed according to stable distributions, using a new linear probabilistic graphical model called LCM. A new graphical model is needed since previous approaches, such as CFGs or the Copula method, cannot compute closed-form inference in linear models involving stable distributions, because they require computation in the pdf or cdf domains, respectively. We start by defining a linear model.

Definition 3.1. (Linear model) Let X₁, ..., Xₙ be a set of mutually independent random variables.³ Let Y₁, ..., Yₘ be a set of observations obtained using the linear model

Y_i = \sum_j A_{ij} X_j \quad \forall i,

where the A_{ij} ∈ R are weighting scalars. We denote the linear model in matrix notation as Y = AX.

Linear models are useful in many domains. For example, in linear channel decoding, X are the transmitted codewords, the matrix A is the linear channel transformation, and Y is a vector of observations. When X are distributed using a Gaussian distribution, the channel model is called an AWGN (additive white Gaussian noise) channel. Typically, the decoding task is finding the most probable X, given A and the observation Y. Although the X are assumed statistically mutually independent when transmitted, given an observation Y they are no longer independent, since they are correlated via the observation. Besides the network application we focus on, another potential application of the current work is linear channel decoding with stable, non-Gaussian noise.
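The closure rules of Proposition 2.1 are easy to check numerically. The sketch below is a minimal sanity check using scipy's levy_stable sampler in the symmetric case β = 0 (where the common stable parameterizations coincide and η = 0); the sample sizes and parameter values are illustrative choices, not taken from this paper.

```python
# Numerical sanity check of Proposition 2.1 (symmetric case, beta = 0).
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha = 1.5
gamma1, delta1 = 1.0, 0.3     # X1 ~ S(alpha, 0, gamma1, delta1)
gamma2, delta2 = 0.5, -0.2    # X2 ~ S(alpha, 0, gamma2, delta2)
n = 200_000

x1 = levy_stable.rvs(alpha, 0.0, loc=delta1, scale=gamma1, size=n, random_state=rng)
x2 = levy_stable.rvs(alpha, 0.0, loc=delta2, scale=gamma2, size=n, random_state=rng)

# (a) aX + b ~ S(alpha, 0, |a|*gamma, a*delta + b): compare medians,
# since the median of a symmetric stable law equals its shift parameter.
a, b = 2.0, -1.0
print(np.median(a * x1 + b), a * delta1 + b)

# (b) X1 + X2 ~ S(alpha, 0, (gamma1^a + gamma2^a)^(1/a), delta1 + delta2).
gamma_sum = (gamma1**alpha + gamma2**alpha) ** (1 / alpha)
ref = levy_stable.rvs(alpha, 0.0, loc=delta1 + delta2, scale=gamma_sum,
                      size=n, random_state=rng)
iqr = lambda v: np.percentile(v, 75) - np.percentile(v, 25)
print(np.median(x1 + x2), delta1 + delta2)   # shifts agree
print(iqr(x1 + x2), iqr(ref))                # scales agree up to MC error
```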
In the rest of this section we develop the foundations for computing inference in a linear model with underlying stable distributions. Because stable distributions do not have closed-form equations in the pdf domain, we must work in the cf domain; hence, we define a dual linear model in the cf domain.

3.1 Duality of LCM and CFG

CFG [19] have shown that the joint probability p(x, y) of any linear model factorizes as a convolution:

p(x, y) = p(x_1, \ldots, x_n, y_1, \ldots, y_m) = \circledast_i \; p(x_i, y_1, \ldots, y_m), \quad (2)

where ⊛ denotes the convolution over the index i. Informally, an LCM is the dual representation of (2) in the characteristic function domain. Next we define the LCM formally and establish its duality with the factorization in (2).

Definition 3.2. (LCM) Given the linear model Y = AX, we define the linear characteristic model (LCM)

\Phi(t_1, \ldots, t_n, s_1, \ldots, s_m) \triangleq \prod_i \phi(t_i, s_1, \ldots, s_m),

where φ(tᵢ, s₁, ..., sₘ) is the characteristic function⁴ of the joint distribution p(xᵢ, y₁, ..., yₘ).

The following two theorems establish the duality between an LCM and its representation in the pdf domain. This duality is well known (see for example [18, 19]) but important for explaining the derivation of the LCM from the linear model.

Theorem 3.3. Given an LCM, assuming p(x, y) as defined in (2) has a closed form and the Fourier transform F[p(x, y)] exists, then F[p(x, y)] = Φ(t₁, ..., tₙ, s₁, ..., sₘ).

Theorem 3.4. Given an LCM, when the inverse Fourier transform exists, F⁻¹(Φ(t₁, ..., tₙ, s₁, ..., sₘ)) = p(x, y) as defined in (2).

The proofs of all theorems are deferred to the supplementary material. Whenever the inverse Fourier transform exists, an LCM has a dual CFG model. In contrast to the CFG model, an LCM is always defined, even when the inverse Fourier transform does not exist. The duality is useful, since it allows us to compute inference in whichever representation is more convenient.

4 Main result: exact and approximate inference in LCM

This section presents our main result. Typically, exact inference in linear models with continuous variables is limited to the well-understood cases of Gaussians and simple regression problems in exponential families. In this section we extend previous results to show how to compute inference (both exact and approximate) in a linear model with underlying stable distributions.

4.1 Exact inference in LCM

The inference task typically involves computing a marginal or a conditional distribution of a probability function. For the rest of the discussion we focus on marginal distributions. The marginal distribution of node xᵢ is typically computed by integrating out all other nodes:

p(x_i \mid y) \propto \int_{X \setminus i} p(x, y)\, dX_{\setminus i},

where X∖i is the set of all nodes excluding node i. Unfortunately, when working with stable distributions, this integral is intractable. Instead, we propose to use a dual operation called slicing, computed in the cf domain.

Definition 4.1. (slicing/evaluation) [28, p. 110]
(a) Joint cf. Given random variables X₁, X₂, the joint cf is φ_{X₁,X₂}(t₁, t₂) = E[e^{i t₁ x₁ + i t₂ x₂}].
(b) Marginal cf. The marginal cf is derived from the joint cf by φ_{X₁}(t₁) = φ_{X₁,X₂}(t₁, 0). This operation is called slicing or evaluation. We denote it φ_{X₁}(t₁) = φ_{X₁,X₂}(t₁, t₂)|_{t₂ = 0}.

³ We do not limit the type of the random variables. The variables may be discrete, continuous, or a mixture of both.
⁴ Defined in the supplementary material.
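Slicing is exact rather than approximate: evaluating a joint cf at zero in the nuisance coordinate is, by definition, the cf of the remaining variable. A small empirical illustration on an arbitrary dependent pair (the data below are synthetic placeholders, not from the paper):

```python
# Slicing (Definition 4.1) on an empirical characteristic function.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.standard_cauchy(100_000)               # heavy-tailed marginal
x2 = 0.5 * x1 + rng.standard_normal(100_000)    # deliberately dependent

t1, t2 = 0.7, 0.0                               # slice at t2 = 0
phi_joint_sliced = np.mean(np.exp(1j * (t1 * x1 + t2 * x2)))
phi_marginal = np.mean(np.exp(1j * t1 * x1))
print(phi_joint_sliced, phi_marginal)           # identical by construction
```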
The following theorem establishes that the marginal distribution can be computed in the cf domain using the slicing operation.

Theorem 4.2. Given an LCM, the marginal cf of the random variable Xᵢ can be computed as

\phi(t_i) = \left[\prod_j \phi(t_j, s_1, \ldots, s_m)\right]_{T \setminus i \,=\, 0}. \quad (3)

In case the inverse Fourier transform exists, the marginal probability of the hidden variable Xᵢ is given by p(xᵢ) ∝ F⁻¹{φ(tᵢ)}.

Based on Thm. 4.2 we propose an exact inference algorithm, LCM-Elimination, for computing the marginal cf:

Algorithm 1: Exact inference in LCM using LCM-Elimination.
  for i ∈ T {
    Eliminate tᵢ by computing φ_{m+i}(N(tᵢ)) = [ ∏_{j ∈ N(tᵢ)} φ(tⱼ, s₁, ..., sₘ) ]|_{tᵢ = 0}
    Remove φ(tⱼ, s₁, ..., sₘ) and tᵢ from the LCM; add φ_{m+i} to the LCM.
  }
  Finally: if F⁻¹ exists, compute p(xᵢ) = F⁻¹(φ_final).

Figure 1: The three special cases of stable distribution with a closed-form pdf: Cauchy, Gaussian, and Lévy.

We use the notation N(k) for the set of graph neighbors of node k, excluding k,⁵ and T for the set {t₁, ..., tₙ}. LCM-Elimination is dual to the CFG-Elimination algorithm [19]. It operates in the cf domain, evaluating one variable at a time and updating the remaining graphical model accordingly. The order of elimination does not affect correctness (although it may affect efficiency). Once the marginal cf φ(tᵢ) is computed, assuming the inverse Fourier transform exists, we can compute the desired marginal probability p(xᵢ).

4.2 Exact inference in stable distributions

Having defined the LCM and shown that inference can be computed in the cf domain, we are ready to show how to compute exact inference in a linear model with underlying stable distributions. We assume that all observation nodes Yᵢ are distributed according to a stable distribution; by the linearity property of stable distributions, the hidden variables Xᵢ are then stable as well. The following theorem is one of the novel contributions of this work, since, as far as we know, no closed-form solution was previously derived.

Theorem 4.3. Given an LCM, Y = AX + Z, with n i.i.d. hidden variables Xᵢ ∼ S(α, β_{xᵢ}, γ_{xᵢ}, δ_{xᵢ}), n i.i.d. noise variables with known parameters Zᵢ ∼ S(α, β_{zᵢ}, γ_{zᵢ}, δ_{zᵢ}), and n observations yᵢ ∈ R, and assuming the matrix A_{n×n} is invertible,⁶ then
a) the observations Yᵢ are distributed according to stable distributions, Yᵢ ∼ S(α, β_{yᵢ}, γ_{yᵢ}, δ_{yᵢ}), with parameters

\gamma_y^{\alpha} = |A|^{\alpha}\gamma_x^{\alpha} + \gamma_z^{\alpha}, \quad \beta_y = \gamma_y^{-\alpha} \odot \left[(|A|^{\alpha} \odot \mathrm{sign}(A))(\beta_x \odot \gamma_x^{\alpha}) + \beta_z \odot \gamma_z^{\alpha}\right], \quad \delta_y = A\delta_x + \eta_y,

\eta_y = \begin{cases} \tan(\tfrac{\pi\alpha}{2})\left[\beta_y \odot \gamma_y - A(\beta_x \odot \gamma_x) - \beta_z \odot \gamma_z\right] & \alpha \neq 1 \\ \tfrac{2}{\pi}\left[\beta_y \odot \gamma_y \odot \log\gamma_y - (A \odot \log|A|)(\beta_x \odot \gamma_x) - A(\beta_x \odot \gamma_x \odot \log\gamma_x) - \beta_z \odot \gamma_z\right] & \alpha = 1; \end{cases}

b) the result of exact inference for the marginals, p(xᵢ | y) ∼ S(α, β_{x|y}, γ_{x|y}, δ_{x|y}), is given in vector notation by

\beta_{x|y} = \gamma_{x|y}^{-\alpha} \odot \left[(|A|^{\alpha} \odot \mathrm{sign}(A))^{-1}(\beta_y \odot \gamma_y^{\alpha})\right], \quad \gamma_{x|y}^{\alpha} = (|A|^{\alpha})^{-1}\gamma_y^{\alpha}, \quad \delta_{x|y} = A^{-1}\left[\delta_y - \eta_x\right], \quad (4)

\eta_x = \begin{cases} \tan(\tfrac{\pi\alpha}{2})\left[\beta_y \odot \gamma_y - A(\beta_{x|y} \odot \gamma_{x|y})\right] & \alpha \neq 1 \\ \tfrac{2}{\pi}\left[\beta_y \odot \gamma_y \odot \log\gamma_y - (A \odot \log|A|)(\beta_{x|y} \odot \gamma_{x|y}) - A(\beta_{x|y} \odot \gamma_{x|y} \odot \log\gamma_{x|y})\right] & \alpha = 1, \end{cases} \quad (5)

where ⊙ is the entrywise product (of both vectors and matrices), |A| is the entrywise absolute value, log(A), A^α and sign(A) are entrywise matrix operations, and β_x ≜ [β_{x₁}, ..., β_{xₙ}]ᵀ, and similarly for β_y, β_z, γ_x, γ_y, γ_z, δ_x, δ_y, δ_z.

⁵ A more detailed explanation of the construction of a graphical model out of the linear relation matrix A is found in [4, Chapter 2.3].
⁶ To simplify the discussion we assume that the hidden and observation vectors have the same length, |X| = |Y| = n. The results extend to the more general case |X| = n, |Y| = m, m ≠ n; see for example [6].
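The forward half of Theorem 4.3 (part a, α ≠ 1 branch) transcribes directly into a dozen lines of numpy. This is a sketch of the equations as reconstructed above, not the authors' released code; ⊙ becomes elementwise `*`, and the matrix products act on the parameter vectors.

```python
import numpy as np

def forward_stable_params(A, alpha, beta_x, gamma_x, delta_x,
                          beta_z, gamma_z, delta_z):
    """Parameters of Y = A X + Z for independent alpha-stable X_i, Z_i
    (alpha != 1 branch of Theorem 4.3a, as reconstructed above)."""
    Aa = np.abs(A) ** alpha                          # entrywise |A|^alpha
    gamma_y_a = Aa @ gamma_x**alpha + gamma_z**alpha # gamma_y^alpha
    beta_y = ((Aa * np.sign(A)) @ (beta_x * gamma_x**alpha)
              + beta_z * gamma_z**alpha) / gamma_y_a
    gamma_y = gamma_y_a ** (1 / alpha)
    eta_y = np.tan(np.pi * alpha / 2) * (
        beta_y * gamma_y - A @ (beta_x * gamma_x) - beta_z * gamma_z)
    delta_y = A @ delta_x + eta_y
    return beta_y, gamma_y, delta_y
```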
Algorithm 2: Approximate inference in LCM using (a) the Characteristic-Slice-Product (CSP) algorithm and (b) the Integral-Convolution (IC) algorithm, both exact on tree topologies, and (c) the Stable-Jacobi algorithm.

(a) CSP:
  Initialize: m_{ij}(tⱼ) = 1, ∀ A_{ij} ≠ 0.
  Iterate until convergence: m_{ij}(t_j) = \left[\phi_i(t_i, s_1, \ldots, s_m) \prod_{k \in N(i)\setminus j} m_{ki}(t_i)\right]_{t_i = 0}
  Finally: \phi(t_i) = \phi_i(t_i, s_1, \ldots, s_m) \prod_{k \in N(i)} m_{ki}(t_i).

(b) IC:
  Initialize: m_{ij}(xⱼ) = 1, ∀ A_{ij} ≠ 0.
  Iterate until convergence: m_{ij}(x_j) = \int_{x_i} p(x_i, y_1, \ldots, y_m) \circledast \prod_{k \in N(i)\setminus j} m_{ki}(x_i)\, dx_i
  Finally: p(x_i) \propto p(x_i, y_1, \ldots, y_m) \circledast \prod_{k \in N(i)} m_{ki}(x_i).

(c) Stable-Jacobi:
  Initialize: S(α, β_{xᵢ|y}, γ_{xᵢ|y}, δ_{xᵢ|y}) = S(α, 0, 0, 0), ∀i.
  Iterate until convergence:
    \gamma_{x_i|y}^{\alpha} = \gamma_{y_i}^{\alpha} - \sum_{j \neq i} |A_{ij}|^{\alpha}\, \gamma_{x_j|y}^{\alpha}
    \beta_{x_i|y} = \gamma_{x_i|y}^{-\alpha}\Big[\beta_{y_i}\gamma_{y_i}^{\alpha} - \sum_{j \neq i} \mathrm{sign}(A_{ij})\,|A_{ij}|^{\alpha}\, \beta_{x_j|y}\,\gamma_{x_j|y}^{\alpha}\Big]
    \delta_{x_i|y} = \delta_{y_i} - \sum_{j \neq i} A_{ij}\,\delta_{x_j|y} - \eta_{x_i|y},
  where η_{xᵢ|y} is given coordinatewise by (5):
    \eta_{x_i|y} = \tan(\tfrac{\pi\alpha}{2})\big[\beta_{y_i}\gamma_{y_i} - \textstyle\sum_j A_{ij}\,\beta_{x_j|y}\gamma_{x_j|y}\big] for α ≠ 1, and
    \eta_{x_i|y} = \tfrac{2}{\pi}\big[\beta_{y_i}\gamma_{y_i}\log\gamma_{y_i} - \textstyle\sum_{j:A_{ij}\neq 0} A_{ij}\log(|A_{ij}|)\,\beta_{x_j|y}\gamma_{x_j|y} - \textstyle\sum_j A_{ij}\,\beta_{x_j|y}\gamma_{x_j|y}\log\gamma_{x_j|y}\big] for α = 1.
  Output: p(xᵢ | y) ≈ S(α, β_{xᵢ|y}, γ_{xᵢ|y}, δ_{xᵢ|y}).

4.3 Approximate inference in LCM

Typically, the cost of exact inference may be expensive. For example, in the related linear model of a multivariate Gaussian (a special case of a stable distribution), LCM-Elimination reduces to a Gaussian-elimination-type algorithm with a cost of O(n³), where n is the number of variables. Approximate inference methods like belief propagation [26] usually require less work than exact inference, but may not always converge (or may converge to an unsatisfactory solution). The cost of exact inference motivates us to devise more efficient approximations. We propose two novel algorithms, variants of belief propagation, for computing approximate inference in an LCM. The first, Characteristic-Slice-Product (CSP), is defined in the LCM (Algorithm 2(a)). The second, Integral-Convolution (IC) (Algorithm 2(b)), is its dual in the CFG. As with belief propagation, our algorithms are exact on tree graphical models. The following theorem establishes this fact.

Theorem 4.4. Given an LCM with underlying tree topology (the matrix A is an irreducible adjacency matrix of a tree graph), the CSP and IC algorithms compute exact inference, producing the marginal cf and the marginal distribution, respectively.

The basic property which allows us to devise the CSP algorithm is that an LCM is defined as a product of factors in the cf domain. Typically, belief propagation is applied to a probability distribution which factors as a product of potentials in the pdf domain; the sum-product algorithm uses the distributivity of the integral and product operations for an efficient recursive evaluation of the marginal probability. Equivalently, the Characteristic-Slice-Product algorithm uses the distributivity of the slicing and product operations to compute the marginal cf efficiently in the cf domain, as shown in Theorem 4.4. In a similar way, the Integral-Convolution algorithm uses the distributivity of the integral and convolution operations to perform efficient inference in the pdf domain.
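The Stable-Jacobi updates are equally mechanical to transcribe. The sketch below implements the α ≠ 1 branch under the unit-diagonal normalization assumed in the next subsection; the update order (scale, then skew, then shift, each using the freshest values available) is a convenience choice, and the η term follows the reconstruction of (5) above. The helper at the end pre-checks the spectral-radius conditions derived in Section 4.4 below.

```python
import numpy as np

def stable_jacobi(A, alpha, beta_y, gamma_y, delta_y, iters=100):
    """Sketch of Algorithm 2(c), alpha != 1, for A with a unit diagonal."""
    n = len(gamma_y)
    off = A - np.eye(n)                  # off-diagonal part (diag(A) = 1)
    off_a = np.abs(off) ** alpha
    beta = np.zeros(n)
    gamma_a = np.zeros(n)                # tracks gamma^alpha
    delta = np.zeros(n)
    for _ in range(iters):
        gamma_a = gamma_y**alpha - off_a @ gamma_a
        beta = (beta_y * gamma_y**alpha
                - (np.sign(off) * off_a) @ (beta * gamma_a)) / gamma_a
        gamma = gamma_a ** (1 / alpha)
        eta = np.tan(np.pi * alpha / 2) * (beta_y * gamma_y
                                           - A @ (beta * gamma))
        delta = delta_y - off @ delta - eta
    return beta, gamma_a ** (1 / alpha), delta

def stable_jacobi_converges(A, alpha):
    """Both sufficient conditions (spectral radii of |R|^alpha and R)."""
    R = np.eye(A.shape[0]) - A
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return rho(np.abs(R) ** alpha) < 1 and rho(R) < 1
```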
Note that the original CFG work [18, 19] did not consider approximate inference; our proposed approximate inference algorithms thus further extend the CFG model.

4.4 Approximate inference for stable distributions

For the case of stable distributions, we derive an approximation algorithm, Stable-Jacobi (Algorithm 2(c)), from the CSP update rules, by substituting the convolution and multiplication-by-scalar operations (Prop. 2.1 b, a) into the update rules of the CSP algorithm given in Algorithm 2(a).

Figure 2: (a) The distribution of network flows on a typical PlanetLab host is fitted quite well by a Lévy distribution (fitted parameters 5.4648e-04, 9.99e-01). (b) The core of the PlanetLab network; 1% of the flows carry 19% of the total bandwidth. (c) Convergence of Stable-Jacobi (L2 change vs. previous iteration).

Like belief propagation, our approximate algorithm Stable-Jacobi is not guaranteed to converge on general graphs containing cycles. We have analyzed the evolution dynamics of the update equations of Stable-Jacobi and derived sufficient conditions for convergence; furthermore, we have analyzed the accuracy of the approximation. Not surprisingly, the sufficient conditions for convergence relate to the properties of the linear transformation matrix A. The following theorem is one of the main novel contributions of this work: it provides both sufficient conditions for the convergence of Stable-Jacobi and closed-form equations for the fixed point.

Theorem 4.5. Given an LCM with n i.i.d. hidden variables Xᵢ and n observations Yᵢ distributed according to stable distributions Yᵢ ∼ S(α, β_{yᵢ}, γ_{yᵢ}, δ_{yᵢ}), and assuming the linear relation matrix A_{n×n} is invertible and normalized to a unit diagonal,⁷ Stable-Jacobi (Algorithm 2(c)) converges to a unique fixed point when both of the following sufficient conditions hold:

(1) ρ(|R|^α) < 1,   (2) ρ(R) < 1,

where ρ(R) is the spectral radius (the largest absolute value of the eigenvalues of R), R ≜ I − A, |R| is the entrywise absolute value, and |R|^α is the entrywise exponentiation. Furthermore, the unique fixed point of convergence is given by equations (4)-(5); the algorithm thus converges to the exact marginals for the linear-stable channel.⁸

5 Application: Network flow monitoring

In this section we propose a novel application of inference in LCMs: modeling the network traffic flows of a large operational worldwide testbed. Additional experimental results on synthetic examples are found in the supplementary material. Network monitoring is an important problem in the monitoring and anomaly detection of communication networks [15, 16, 8]. We obtained Netflow PlanetLab network data [10] collected on 25 January 2010. The PlanetLab network [1] is a distributed networking testbed with around 1000 server nodes scattered over about 500 sites around the world. We define a network flow as a directed edge between a transmitting and a receiving host, with the number of packets transmitted in this flow as the scalar edge weight. We propose to use LCMs for modeling the distribution of network flows. Figure 2(a) plots the distribution of flows, sorted by their bandwidth, on a typical PlanetLab node.
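Reproducing a per-host fit like Figure 2(a) does not require the Matlab package cited below; for the special case α = 1/2, β = 1 one can use scipy's two-parameter Lévy law. The file name and array here are hypothetical placeholders for the Netflow-derived per-flow packet counts of one host.

```python
# Fit a Levy distribution (stable with alpha = 1/2, beta = 1) to one
# host's flow sizes; `host_flows.txt` is a hypothetical data file.
import numpy as np
from scipy.stats import levy

flow_sizes = np.loadtxt("host_flows.txt")
loc, scale = levy.fit(flow_sizes)       # shift (delta) and scale (gamma)
print(loc, scale)                       # cf. the fit quoted in Figure 2(a)
```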
Empirically, we found that the network flow distribution on a single PlanetLab node is fitted quite well by a Lévy distribution, a stable distribution with α = 0.5, β = 1. The empirical means of the fitted parameters are mean(γ) ≈ 1e-4 and mean(δ) ≈ 1. For the fitting we used Mark Veillette's Matlab stable distribution package [31]. Using previously proposed histogram techniques [16] for tracking the flow distribution in Figure 2(a), we would need to store 40 values (the percentage of bandwidth for each source port). In contrast, by approximating the network flow distribution with a stable distribution, we need only 4 parameters (α, β, γ, δ)! We thus dramatically reduce storage requirements. Furthermore, using the theory developed in the previous sections, we can linearly aggregate distributions of flows over clusters of nodes. We extracted a connected component of traffic flows connecting the 652 core network nodes and fitted a stable distribution characterizing the flow behavior of each machine. A partition of 376 machines served as the observed flows Yᵢ (where the flow distribution is known); the task is to predict the distribution of the remaining 376 unobserved flows Xᵢ, based on the observed traffic flows (the entries of A_{ij}). We ran approximate inference using Stable-Jacobi and compared the results to the exact result computed by LCM-Elimination. We emphasize again that with the related techniques (the Copula method, CFG, and ICA) it is not possible to compute exact inference for the problem at hand. In the supplementary material we provide a detailed comparison with two previous approximation algorithms: non-parametric BP (NBP) and expectation propagation (EP). Figure 2(c) plots the convergence of the three parameters γ, β, δ as a function of the iteration number of Stable-Jacobi. Note that the convergence speed is geometric (ρ(R) = 0.02 ≪ 1). Regarding computational overhead, LCM-Elimination requires 4·376³ operations, while Stable-Jacobi converged to an accuracy of 1e-5 in only 4·376²·25 operations. An additional benefit of Stable-Jacobi is that it is a distributed algorithm, naturally suitable for communication networks. Source code for some of the algorithms presented here can be found at [3].

6 Conclusion and future work

We have presented a novel linear graphical model, the LCM, defined in the cf domain, and have shown for the first time how to perform exact and approximate inference in a linear multivariate graphical model when the underlying distributions are stable. We have demonstrated an application of our construction to computing inference over network flows. We have proposed to borrow ideas from belief propagation for computing efficient inference, based on the distributivity of the slice-product and integral-convolution operation pairs. We believe that other problem domains may benefit from this construction, and plan to pursue this as future work.

⁷ When the matrix A is positive definite it is always possible to normalize it to a unit diagonal; the normalized matrix is D^{-1/2} A D^{-1/2}, where D = diag(A). Normalizing to a unit diagonal simplifies the convergence analysis (as done, for example, in [12]) but does not limit the generality of the proposed method.
⁸ Note the interesting relation to the walk-summability convergence condition [12] of belief propagation in the Gaussian case, ρ(|R|) < 1. Our results are more general, since they apply for any characteristic exponent 0 < α ≤ 2 and not just for α = 2 as in the Gaussian case.
We believe there are several exciting directions for extending this work. Other families of distributions, like geometric stable distributions or the Wishart, can be analyzed in our model, and the Fourier transform can be replaced with a more general kernel transform, creating richer models.

Acknowledgement

D. Bickson would like to thank Andrea Pagnani (ISI) for inspiring the direction of this research, John P. Nolan (American University) for sharing parts of his excellent book about stable distributions online, Mark Veillette (Boston University) for sharing his stable distribution code online, Jason K. Johnson (LANL) for assisting in the convergence analysis, and Sapan Bhatia and Marc E. Fiuczynski (Princeton University) for providing the PlanetFlow data. This research was supported by ARO MURI W911NF0710287, ARO MURI W911NF0810242, NSF Mundo IIS-0803333 and NSF Nets-NBD CNS-0721591.

References
[1] PlanetLab Network Homepage. http://www.planet-lab.org/.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
[3] D. Bickson. Linear characteristic graphical models Matlab toolbox. Carnegie Mellon University. Available at http://www.cs.cmu.edu/~bickson/stable/.
[4] D. Bickson. Gaussian Belief Propagation: Theory and Application. PhD thesis, The Hebrew University of Jerusalem, 2008.
[5] D. Bickson, D. Baron, A. T. Ihler, H. Avissar, and D. Dolev. Fault identification via non-parametric belief propagation. IEEE Transactions on Signal Processing, to appear, 2010.
[6] D. Bickson, O. Shental, P. H. Siegel, J. K. Wolf, and D. Dolev. Gaussian belief propagation based multiuser detection. In IEEE Int. Symp. on Inform. Theory (ISIT), Toronto, Canada, July 2008.
[7] M. Briers, A. Doucet, and S. S. Singh. Sequential auxiliary particle belief propagation. In International Conference on Information Fusion, pages 705-711, 2005.
[8] A. Chen, J. Cao, and T. Bu. Network tomography: Identifiability and Fourier domain estimation. In INFOCOM 2007: 26th IEEE International Conference on Computer Communications, pages 1875-1883, May 2007.
[9] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[10] M. Huang, A. Bavier, and L. Peterson. PlanetFlow: maintaining accountability for network services. SIGOPS Oper. Syst. Rev., (1):89-94, 2006.
[11] A. T. Ihler, E. Sudderth, W. Freeman, and A. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In Neural Information Processing Systems (NIPS), Dec. 2003.
[12] J. Johnson, D. Malioutov, and A. Willsky. Walk-sum interpretation and analysis of Gaussian belief propagation. In Advances in Neural Information Processing Systems 18, pages 579-586, 2006.
[13] A. Lakhina, M. Crovella, and C. Diot. Diagnosing network-wide traffic anomalies. In SIGCOMM '04, pages 219-230, New York, NY, USA, October 2004.
[14] A. Lakhina, M. Crovella, and C. Diot. Mining anomalies using traffic feature distributions. In SIGCOMM '05, pages 217-228, New York, NY, USA, 2005. ACM.
[15] A. Lakhina, K. Papagiannaki, M. Crovella, C. Diot, E. D. Kolaczyk, and N. Taft. Structural analysis of network traffic flows. In SIGMETRICS '04/Performance '04, pages 61-72, New York, NY, USA, June 2004.
[16] X. Li, F. Bian, M. Crovella, C. Diot, R. Govindan, G. Iannaccone, and A. Lakhina. Detection and identification of network anomalies using sketch subspaces. In IMC '06: Proceedings of the 6th ACM SIGCOMM Conference on Internet Measurement, pages 147-152, New York, NY, USA, 2006. ACM.
[17] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, to appear, 2009.
[18] Y. Mao and F. R. Kschischang. On factor graphs and the Fourier transform. IEEE Trans. Inform. Theory, 51:1635-1649, August 2005.
[19] Y. Mao, F. R. Kschischang, and B. J. Frey. Convolutional factor graphs as probabilistic models. In UAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 374-381, Arlington, Virginia, United States, 2004. AUAI Press.
[20] R. J. Marks II. Handbook of Fourier Analysis and Its Applications. Oxford University Press, 2009.
[21] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[22] R. B. Nelsen. An Introduction to Copulas. Springer Series in Statistics, second edition, 2006.
[23] H. X. Nguyen and P. Thiran. Network loss inference with second order statistics of end-to-end flows. In IMC '07: Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, pages 227-240, New York, NY, USA, 2007. ACM.
[24] J. P. Nolan. Bibliography on stable distributions, processes and related topics. Technical report, 2010.
[25] J. P. Nolan. Stable Distributions: Models for Heavy Tailed Data. Birkhäuser, Boston, 2010. In progress; Chapter 1 online at http://academic2.american.edu/~jpnolan.
[26] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, 1988.
[27] H. Shen, S. Jegelka, and A. Gretton. Fast kernel-based independent component analysis. IEEE Transactions on Signal Processing, 57(9):3498-3511, May 2009.
[28] T. T. Soong. Fundamentals of Probability and Statistics for Engineers. Wiley, 2004.
[29] E. Sudderth, A. T. Ihler, W. Freeman, and A. Willsky. Nonparametric belief propagation. In Conference on Computer Vision and Pattern Recognition (CVPR), June 2003.
[30] V. V. Uchaikin and V. M. Zolotarev. Chance and Stability: Stable Distributions and their Applications. VSP, Utrecht, 1999.
[31] M. Veillette. Stable distribution Matlab package. Boston University. Available at http://math.bu.edu/people/mveillet/.
[32] A. Yener, R. Yates, and S. Ulukus. CDMA multiuser detection: A nonlinear programming approach. IEEE Trans. on Communications, 50(6):1016-1024, 2002.
3,256
395
Back Propagation is Sensitive to Initial Conditions

John F. Kolen and Jordan B. Pollack
Laboratory for Artificial Intelligence Research
The Ohio State University, Columbus, OH 43210, USA
[email protected] [email protected]

Abstract

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration.

1 INTRODUCTION

Back propagation (Rumelhart et al., 1986) is the network training method of choice for many neural network projects, and for good reason. Like other weak methods, it is simple to implement, faster than many other "general" approaches, well-tested by the field, and easy to mold (with domain knowledge encoded in the learning environment) into very specific and efficient algorithms. Rumelhart et al. made a confident statement: for many tasks, "the network rarely gets stuck in poor local minima that are significantly worse than the global minima" (p. 536). According to them, initial weights of exactly 0 cannot be used, since symmetries in the environment are not sufficient to break symmetries in initial weights. Since their paper was published, the convention in the field has been to choose initial weights with a uniform distribution between plus and minus ρ, usually set to 0.5 or less. The convergence claim was based solely upon their empirical experience with the back-propagation technique. Since then, Minsky & Papert (1988) have argued that there exists no proof of convergence for the technique, and several researchers (e.g. Judd, 1988) have found that the convergence time must be related to the difficulty of the problem; otherwise an unsolved computer science question (whether P = NP) would finally be answered. We do not wish to make claims about convergence of the technique in the limit (with vanishing step-size), or about the relationship between task and performance, but wish to point out a pervasive behavior of the technique which has gone unnoticed for several years: the sensitivity of back propagation to initial conditions.

2 THE MONTE-CARLO EXPERIMENT

Initially, we performed empirical studies to determine the effect of learning rate, momentum rate, and the range of initial weights on t-convergence (Kolen and Goel, to appear). We use the term t-convergence to refer to whether or not a network, starting at a precise initial configuration, could learn to separate the input patterns according to a boolean function (correct outputs above or below 0.5) within t epochs. The experiment consisted of training a 2-2-1 network on exclusive-or while varying three independent variables in 114 combinations: learning rate, η, equal to 1.0 or 2.0; momentum rate, α, equal to 0.0, 0.5, or 0.9; and initial weight range, ρ, equal to 0.1 to 0.9 in 0.1 increments, and 1.0 to 10.0 in 1.0 increments. Each combination of parameters was used to initialize and train a number of networks.¹ Figure 1 plots the percentage of t-convergent (where t = 50,000 epochs of 4 presentations) initial conditions for the 2-2-1 network trained on the exclusive-or problem.
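This setup is easy to re-create. The sketch below is a minimal 2-2-1 sigmoid network trained on exclusive-or by plain back-propagation with momentum, with initial weights drawn uniformly from [-ρ, ρ]. The epoch budget is cut well below the paper's 50,000 to keep a demonstration run fast, and all names are illustrative choices rather than the authors' code.

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def t_convergent(rho, eta=1.0, mom=0.0, epochs=2_000, seed=0):
    """Return the epoch of t-convergence on XOR, or None."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-rho, rho, (2, 3))   # 2 hidden units (bias in col 2)
    W2 = rng.uniform(-rho, rho, 3)        # output unit (bias in slot 2)
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    for epoch in range(epochs):
        for x, t in zip(X, T):            # 4 presentations per epoch
            xb = np.append(x, 1.0)
            h = sig(W1 @ xb)
            hb = np.append(h, 1.0)
            o = sig(W2 @ hb)
            go = (o - t) * o * (1 - o)            # output delta
            gh = go * W2[:2] * h * (1 - h)        # hidden deltas
            dW2 = -eta * go * hb + mom * dW2      # momentum updates
            dW1 = -eta * np.outer(gh, xb) + mom * dW1
            W2 += dW2
            W1 += dW1
        outs = [sig(W2 @ np.append(sig(W1 @ np.append(x, 1.0)), 1.0))
                for x in X]
        if all((o > 0.5) == (t > 0.5) for o, t in zip(outs, T)):
            return epoch
    return None

# Fraction of t-convergent runs for one (eta, mom, rho) cell:
runs = [t_convergent(rho=3.0, seed=s) for s in range(20)]
print(sum(r is not None for r in runs) / len(runs))
```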
From the figure we thus conclude that the choice of ρ ≤ 0.5 is more than a convenient symmetry-breaking default: it is quite necessary to obtain low levels of nonconvergent behavior.

Figure 1: Percentage T-Convergence vs. Initial Weight Range. (Percentage of non-convergence after 50,000 trials, plotted against ρ from 0 to 10, with one curve per learning-rate/momentum combination: L=1.0 M=0.0, L=1.0 M=0.5, L=1.0 M=0.9, and L=2.0 M=0.9.)

3 SCENES FROM EXCLUSIVE-OR

Why do networks exhibit the behavior illustrated in Figure 1? While some might argue that very high initial weights (i.e. ρ > 10.0) lead to very long convergence times, since the derivative of the semi-linear sigmoid function is effectively zero for large weights, this does not explain the fact that when ρ is between 2.0 and 4.0 the non-t-convergence rate varies from 5 to 50 percent. Thus, we decided to utilize a more deterministic approach for eliciting the structure of the initial conditions giving rise to t-convergence. Unfortunately, most networks have many weights, and thus many dimensions in initial-condition space. We can, however, examine 2-dimensional slices through the space in great detail. A slice is specified by an origin and two orthogonal directions (the X and Y axes). In the figures below, we vary the initial weights regularly throughout the plane formed by the axes (with the origin in the lower left-hand corner) and collect the results of running back-propagation to a particular time limit for each initial condition. The map is displayed with grey level linearly related to time of convergence: black meaning not t-convergent, and white representing the fastest convergence time in the picture. Figure 2 is a schematic representation of the networks used in this and the following experiment; the numbers on the links and in the nodes will be used for identification purposes. Figures 3 through 11 show several interesting "slices" of the initial-condition space for 2-2-1 networks trained on exclusive-or. Each slice is compactly identified by its 9-dimensional weight vector and the associated learning/momentum rates. For instance, the vector (-3+2+7-4X+5-2-6Y) describes a network with an initial weight of -0.3 between the left hidden unit and the left input unit. Likewise, "+5" in the sixth position represents an initial bias of 0.5 on the right hidden unit. The letters "X" and "Y" indicate that the corresponding weight is varied along the X or Y axis from -10.0 to +10.0 in steps of 0.1. All the figures in this paper contain the results of 40,000 runs of back-propagation (i.e. 200 pixels by 200 pixels) for up to 200 epochs (where an epoch consists of 4 training examples).

¹ The numbers of networks ranged from 8 to 8355, depending on the availability of computational resources. Those data points calculated with small samples were usually surrounded by data points with larger samples.

Figure 2: (Schematic network.)
Figure 3: (-5-3+3+6Y-1-6+7X) η=3.25 α=0.40
Figure 4: (+4-7+6+0-3Y+1X+1) η=2.75 α=0.00
Figure 5: (-5+5+1-6+3XY+8+3) η=2.75 α=0.80
Figure 6: (YX-3+6+8+3+1+7-3) η=3.25 α=0.00
Figure 7: (Y+3-9-2+6+7-3X+7) η=3.25 α=0.60
Figure 8: (-6-4XY-6-6+9-4-9) η=3.00 α=0.50
Figure 9: (-2+1+9-1X-3+8Y-4) η=2.75 α=0.20
Figure 10: (+1+8-3-6X-1+1+8Y) η=3.50 α=0.90
Figure 11: (+7+4-9-9-5Y-3+9X) η=3.00 α=0.70
Figure 12: (-9.0, -1.8) step 0.018
Figure 13: (-6.966, -0.500) step 0.004
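One such slice can be rasterized by holding seven of the nine weights fixed and sweeping the remaining two, recording the epochs-to-convergence per grid cell. The sketch below reuses X, T and sig from the previous sketch; the fixed weight values, the (η, α) pair, and the 41×41 grid (in place of the paper's 200×200) are illustrative.

```python
import numpy as np

def epochs_to_converge(W1, W2, eta=3.0, mom=0.5, epochs=200):
    """Train from an explicit initial weight setting; cf. t_convergent."""
    W1, W2 = W1.copy(), W2.copy()
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    for epoch in range(epochs):
        for x, t in zip(X, T):
            xb = np.append(x, 1.0)
            h = sig(W1 @ xb)
            hb = np.append(h, 1.0)
            o = sig(W2 @ hb)
            go = (o - t) * o * (1 - o)
            gh = go * W2[:2] * h * (1 - h)
            dW2 = -eta * go * hb + mom * dW2
            dW1 = -eta * np.outer(gh, xb) + mom * dW1
            W2 += dW2
            W1 += dW1
        outs = [sig(W2 @ np.append(sig(W1 @ np.append(x, 1.0)), 1.0))
                for x in X]
        if all((o > 0.5) == (t > 0.5) for o, t in zip(outs, T)):
            return epoch
    return None

def cell(wx, wy):
    e = epochs_to_converge(np.array([[-0.3, 0.2, 0.7], [-0.4, wx, 0.5]]),
                           np.array([-0.2, -0.6, wy]))
    return -1 if e is None else e          # -1 renders as black

grid = np.linspace(-10.0, 10.0, 41)
image = np.array([[cell(wx, wy) for wx in grid] for wy in grid])
# `image` can now be shown with grey levels, as in Figures 3 through 13.
```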
Figures 12 and 13 present a closer look at the sensitivity of back-propagation to initial conditions. These figures zoom into a complex region of Figure 11; the captions list the location of the origin and the step size used to generate each picture. Sensitivity behavior can also be demonstrated with even simpler functions. Take the case of a 2-2-1 network learning the or function. Figure 14 shows the effect of learning "or" on networks (+5+5-1X+5-1Y+3-1), varying weights 4 (X axis) and 7 (Y axis) from -20.0 to 20.0 in steps of 0.2. Figure 15 shows the same region, except that it partitions the display according to equivalent solution networks after t-convergence (200 epoch limit), rather than the time to convergence. Two networks are considered equivalent² if their weights have the same sign. Since there are 9 weights, there are 512 (2⁹) possible network equivalence classes. Figures 16 through 25 show successive zooms into the central swirl, each identified by the XY coordinate of its lower-left corner and the pixel step size. After 200 iterations, the resulting networks could be partitioned into 37 (both convergent and nonconvergent) classes. Obviously, the smooth behavior of the t-convergence plots can be deceiving, since two initial conditions, arbitrarily alike, can lead to quite different final network configurations. Note the triangles appearing in Figures 19, 21 and 23, and the mosaic in Figure 25 corresponding to the area which did not converge within 200 iterations in Figure 24. The triangular boundaries are similar to fractal structures generated under iterated function systems (Barnsley, 1988): in this case, the iterated function is the back-propagation learning method.

² For rendering purposes only. It is extremely difficult to know precisely the equivalence classes of solutions, so we approximated.

Figure 14: (-20.00000, -20.00000) step 0.200000
Figure 15: Solution networks
Figure 16: (-4.500000, -4.500000) step 0.030000
Figure 17: Solution networks
Figure 18: (-1.680000, -1.350000) step 0.002400
Figure 19: Solution networks
Figure 20: (-1.536000, -1.197000) step 0.000780
Figure 21: Solution networks
Figure 22: (-1.472820, -1.145520) step 0.000070
Figure 23: Solution networks
Figure 24: (-1.467150, -1.140760) step 0.000016
Figure 25: Solution networks

Table 1: Network weights for Figures 26 through 30 (column headers follow the recoverable layout of the original table).

Weight   Figure 26     Figure 28     Figures 27/29/30
1        -0.34959000   -0.34959000   -0.34959000
2         0.00560000    0.00560000    0.00560000
3        -0.26338813    0.39881098    0.65060705
4         0.75501968   -0.16718577    0.75501968
5         0.47040862   -0.28598450    0.91281711
6        -0.18438011   -0.18438011   -0.19279729
7         0.46700363   -0.06778983    0.56181073
8        -0.48619500    0.66061292    0.20220653
9         0.62821201   -0.39539510    0.11201949
10       -0.90039973    0.55021922    0.67401200
11        0.48940201    0.35141364   -0.54978875
12       -0.70239312   -0.17438740   -0.69839197
13       -0.95838741   -0.07619988   -0.19659844
14        0.46940394    0.88460041    0.89221204
15       -0.73719884    0.67141031   -0.56879740
16        0.96140103   -0.10578894    0.20201484

We propose that these fractal-like boundaries arise in back-propagation due to the existence of multiple solutions (attractors), the non-zero learning parameters, and the non-linear, deterministic nature of the gradient-descent approach.
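The sign-based equivalence relation of footnote 2 is a one-liner; applying it to the final weights of each grid cell yields partition maps like Figures 15, 17, 19, 21, 23 and 25 (returning the trained weights from the trainer above is the only change needed). A possible sketch:

```python
import numpy as np

def weight_class(W1, W2):
    """Equivalence class of a trained 2-2-1 network: the sign pattern of
    its nine weights (footnote 2's approximation; at most 2^9 = 512)."""
    signs = np.sign(np.concatenate([W1.ravel(), W2]))
    return tuple(signs.astype(int))
```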
When more than one hidden unit is utilized, or when an environment has internal symmetry or is very underconstrained, there will be multiple attractors corresponding to the large number of hidden-unit permutations which form equivalence classes of functionality. As the number of solutions available to the gradient-descent method increases, the non-local interactions between them become more complicated. This explains the puzzling result, noted by several researchers, that as more hidden units are added, back-propagation slows down instead of speeding up (e.g. Lippman and Gold, 1987). Rather than a hill-climbing metaphor with local peaks to get stuck on, we should instead think of a many-body metaphor: the existence of many bodies does not imply that a particle will take a simple path to land on one. From this view, we see that Rumelhart et al.'s claim of back-propagation usually converging is due to a very tight focus inside the "eye of the storm". Could learning and momentum rates also be involved in the storm? Such a question prompted another study, this time focused on the interaction of learning and momentum rates. Rather than alter the initial weights of a set of networks, we varied the learning rate along the X axis and the momentum rate along the Y axis. Figures 26, 27, and 28 were produced by training a 3-3-1 network on 3-bit parity until t-convergence (250 epoch limit). Table 1 lists the initial weights of the networks trained in Figures 26 through 31. Examination of the fuzzy area in Figure 26 shows how small changes in learning and/or momentum rate can drastically affect t-convergence (Figures 30 and 31).

Figure 26: η=(0.0,4.0) α=(0.0,1.25)
Figure 27: η=(0.0,4.0) α=(0.0,1.25)
Figure 28: η=(0.0,4.0) α=(0.0,1.25)
Figure 29: η=(3.456,3.504) α=(0.835,0.840)
Figure 30: η=(3.84,3.936) α=(0.59,0.62)

4 DISCUSSION

Chaotic behavior has been carefully circumvented by many neural network researchers (through the choice of symmetric weights by Hopfield (1982), for example), but has been reported with increasing frequency over the past few years (e.g. Kurten and Clark, 1986). Connectionists, who use neural models for cognitive modeling, disregard these reports of extreme non-linear behavior in spite of common knowledge that non-linearity is what enables network models to perform non-trivial computations in the first place. All work to date has noticed various forms of chaos in network dynamics, but not in learning dynamics. Even if back-propagation is shown to be non-chaotic in the limit, this still does not preclude the existence of fractal boundaries between attractor basins, since other non-chaotic non-linear systems produce such boundaries (e.g. forced pendulums with two attractors (D'Humieres et al., 1982)). What does this mean to the back-propagation community? From an engineering-applications standpoint, where only the solution matters, nothing at all: when an optimal set of weights for a particular problem is discovered, it can be reproduced through digital means. From a scientific standpoint, however, this sensitivity to initial conditions demands that neural network learning results be specially treated to guarantee replicability.
When theoretical claims are made (from experience) regarding the power of an adaptive network to model some phenomenon, or when claims are made regarding the similarity between psychological data and network performance, the initial conditions for the network need to be precisely specified or filed in a public scientific database. What about the future of back-propagation? We remain neutral on the issue of its ultimate convergence, but our result points to a few directions for improved methods. Since the slowdown occurs as a result of the global influences of multiple solutions, an algorithm that first factors the symmetry out of both the network and the training environment (e.g. using domain knowledge) may be helpful. Furthermore, it may also turn out that search methods which harness "strange attractors" ergodically guaranteed to come arbitrarily close to some subset of solutions might work better than methods based on strict gradient descent. Finally, we view this result as strong impetus to discover how to exploit the information-creative aspects of non-linear dynamical systems for future models of cognition (Pollack, 1989).

Acknowledgements

This work was supported by Office of Naval Research grant number N00014-89-J-1200. Substantial free use of over 200 Sun workstations was generously provided by our department.

References

M. Barnsley, Fractals Everywhere, Academic Press, San Diego, CA, (1988).
J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proceedings of the US National Academy of Sciences, 79:2554-2558, (1982).
D. D'Humieres, M. R. Beasley, B. A. Huberman, and A. Libchaber, "Chaotic States and Routes to Chaos in the Forced Pendulum", Physical Review A, 26:3483-96, (1982).
S. Judd, "Learning in Networks is Hard", Journal of Complexity, 4:177-192, (1988).
J. Kolen and A. Goel, "Learning in Parallel Distributed Processing Networks: Computational Complexity and Information Content", IEEE Transactions on Systems, Man, and Cybernetics, in press.
K. E. Kurten and J. W. Clark, "Chaos in Neural Networks", Physics Letters, 114A:413-418, (1986).
R. P. Lippman and B. Gold, "Neural Classifiers Useful for Speech Recognition", in 1st International Conference on Neural Networks, IEEE, IV:417-426, (1987).
M. L. Minsky and S. A. Papert, Perceptrons, MIT Press, (1988).
J. B. Pollack, "Implications of Recursive Auto Associative Memories", in Advances in Neural Information Processing Systems (ed. D. Touretzky), pp. 527-536, Morgan Kaufmann, San Mateo, (1989).
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Representations by Back-Propagating Errors", Nature, 323:533-536, (1986).
A Log-Domain Implementation of the Diffusion Network in Very Large Scale Integration

Yi-Da Wu, Shi-Jie Lin, and Hsin Chen
Department of Electrical Engineering, National Tsing Hua University
Hsinchu, Taiwan 30013
{ydwu;hchen}@ee.nthu.edu.tw

Abstract

The Diffusion Network (DN) is a stochastic recurrent network which has been shown capable of modeling the distributions of continuous-valued, continuous-time paths. However, the dynamics of the DN are governed by stochastic differential equations, making the DN unfavourable for simulation in a digital computer. This paper presents the implementation of the DN in analogue Very Large Scale Integration, enabling the DN to be simulated in real time. Moreover, the log-domain representation is applied to the DN, allowing the supply voltage and thus the power consumption to be reduced without limiting the dynamic ranges for diffusion processes. A VLSI chip containing a DN with two stochastic units has been designed and fabricated. The design of the component circuits will be described, and the simulation of the full system will be presented. The simulation results demonstrate that the DN in VLSI is able to regenerate various types of continuous paths in real time.

1 Introduction

In many implantable biomedical microsystems [1, 2], an embedded system capable of recognising high-dimensional, time-varying signals has been demanded. For example, recognising multichannel neural activity on-line is important for implantable brain-machine interfaces to avoid transmitting all data wirelessly, or to control prosthetic devices and to deliver bio-feedback in real time [3]. The Diffusion Network (DN) proposed by Movellan is a stochastic recurrent network whose stochastic dynamics can be trained to model the probability distributions of continuous-time paths by the Monte-Carlo Expectation-Maximisation (EM) algorithm [4, 5]. As stochasticity is useful for generalising the natural variability in data [6, 7], the DN is further shown suitable for recognising noisy, continuous-time biomedical data [8].

However, the stochastic dynamics of the DN are defined by a set of continuous-time stochastic differential equations (SDEs). The speed of simulating stochastic differential equations in a digital computer is inherently limited by the serial processing and numerical iterations of the computer. Translating the DN into analogue circuits is thus of great interest for simulating the DN in real time by exploiting the natural, differential current-voltage (I-V) relationship of capacitors [9].

This paper presents the implementation of the DN in analogue Very Large Scale Integration (VLSI). To minimise the power consumption, the power supply voltage is only 1.5 V, and most transistors are operated in the subthreshold region. As the reduced supply voltage directly limits the dynamic range available for voltages across capacitors, the log-domain representation proposed in [10] is applied to the DN, allowing diffusion processes to be simulated within a limited voltage range. After a brief introduction to the DN, the following sections derive the log-domain representation of the DN and describe its corresponding implementation in analogue VLSI.

[Figure 1: The architecture of a Diffusion Network with one visible and two hidden units.]
[Figure 2: The block diagram of a DN unit in VLSI.]

2 The Diffusion Network

As shown in Fig.
1, the DN comprises n continuous-time, continuous-valued stochastic units with fully recurrent connections. The state of the jth unit at time t, x_j(t), is governed by

    dx_j(t)/dt = μ_j(x_j(t)) + σ · dB(t)/dt    (1)

where μ_j is a deterministic drift term given in (2), σ a constant, and dB(t) the Brownian motion. The Brownian motion introduces the stochasticity, enriching greatly the representational capability of the DN [5].

    μ_j(x_j(t)) = κ_j ( −ρ_j · x_j(t) + ξ_j + Σ_{i=1..n} ω_ij · φ(x_i(t)) )    (2)

ω_ij defines the connection weight from unit i to unit j, while κ_j^{-1} and ρ_j^{-1} represent the input capacitance and transmembrane resistance, respectively, of the jth unit. ξ_j is the input bias, and φ is the sigmoid function given as

    φ(x_j; a) = −1 + 2/(1 + e^{−a·x_j}) = tanh( (a/2) · x_j )    (3)

where a adapts the slope of the sigmoid function.

As shown in Fig. 1, the DN contains both visible (white) and hidden (grey) stochastic units. The learning of the DN aims to regenerate at the visible units the probability distribution of a specific set of continuous paths. The number of visible units thus equals the dimension of the data to be modeled, while the minimum number of hidden units required for modeling data satisfactorily is identified by experimental trials. During training, visible units are "clamped" to the dynamics of the training dataset, and the dynamics of hidden units are Monte-Carlo sampled for estimating the optimal parameters (ω_ij, ξ_j, ρ_j, κ_j) that maximise the expectation of the training data [5]. After training, all units are given initial values at t = 0 only, in order to sample the dynamics modeled by the DN. The similarity between the dynamics of the visible units and those of the training data indicates how well the DN models the data.

2.1 Log-domain translation

To maximise the dynamic ranges for diffusion processes in VLSI, the stochastic state x_j(t) is represented as a current and then logarithmically compressed into a voltage V_Xj in VLSI [11]. The logarithmic compression allows x_j(t) to change over three decades within a limited voltage range for V_Xj. The voltage representation V_Xj further facilitates the exploitation of the natural, differential (I-V) relationship of a capacitor to simulate SDEs in real time and in parallel.

The logarithmic relationship between x_j(t) and V_Xj can be realised by the exponential I-V characteristics of a MOS transistor in subthreshold operation [12]. To keep x_j(t) a non-negative value (current) in VLSI, an offset x_off is added to x_j(t), resulting in the following relationship between x_j(t) and V_Xj:

    x_j + x_off = I_S · e^{γ·V_Xj},    dx_j = γ · I_S · e^{γ·V_Xj} · dV_Xj    (4)

where I_S and γ are process-dependent constants extractable from simulated I-V curves of transistors. Substituting Eq. (4) into Eq. (1) then translates the diffusion process in Eq. (1) into the following equation:

    C_Xj · dV_Xj/dt = ( ξ_j + Σ_{i=1..n} ω_ij · φ(x_i) ) · e^{−γ·V_Xj} + (σ/κ_j) · e^{−γ·V_Xj} · dB_j(t)/dt + ρ_j · x_off · e^{−γ·V_Xj} − ρ_j · I_S    (5)

where C_Xj equals γ/κ_j. Fig. 2 illustrates the block diagram for implementing Eq. (5) in VLSI. C_Xj is a capacitor and V_Xj the voltage across the capacitor. Each term on the right-hand side of Eq. (5) then corresponds to a current flowing into C_Xj. Let (V_P − V_N) and I_VAR represent the differential input voltage and the input current of an EXP-element, respectively. Each EXP-element in Fig. 2 produces an output current of I_out = I_VAR · e^{γ·(V_P − V_N)}. Therefore, the EXP-elements implement the first three terms, each multiplied by e^{−γ·V_Xj}, in accordance with Eq. (5). The last term, ρ_j · I_S, is a constant and is thus implemented by a constant current source. Finally, the sigmoid circuit transforms x_j into φ(x_j), and the multipliers output a total current proportional to Σ_{i=1..n} ω_ij · φ(x_i).
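Before committing to silicon, the log-domain translation can be sanity-checked numerically. The sketch below is ours, not the authors' code: it integrates the original diffusion of Eqs. (1)-(2) and the log-domain form of Eq. (5) side by side with a shared noise stream, using Euler-Maruyama steps (an assumed integration scheme) and the numeric parameters of Table 1, so that the state recovered from V_Xj via Eq. (4) should closely track the direct simulation.

```python
# A minimal sketch (ours, not the authors' code) comparing the direct DN
# diffusion of Eqs. (1)-(2) with its log-domain form, Eq. (5), using
# Euler-Maruyama integration and the numeric parameters of Table 1.
import numpy as np

n = 2                                  # two stochastic units
rng = np.random.default_rng(0)
omega = rng.normal(0, 1, (n, n))       # connection weights (assumed values)
xi = np.zeros(n)                       # input biases
rho = np.ones(n)                       # adapted in place of kappa (Sec. 2.2)
kappa = np.ones(n)
sigma, a = 1.0, 1.0
gamma, I_S, x_off = 30.0, 1.0, 5.0     # I_S = 1 in numeric units
C = gamma / kappa                      # C_Xj = gamma / kappa_j (numeric 30)
dt, steps = 0.05, 1000

phi = lambda x: np.tanh(0.5 * a * x)   # Eq. (3)

x = np.zeros(n)                                  # direct state
V = np.log((x + x_off) / I_S) / gamma            # log-domain state, Eq. (4)
for _ in range(steps):
    dB = rng.normal(0, np.sqrt(dt), n)           # shared Brownian increments
    drift = kappa * (xi - rho * x + omega.T @ phi(x))   # Eq. (2)
    x = x + drift * dt + sigma * dB              # Eq. (1)
    xl = I_S * np.exp(gamma * V) - x_off         # state recovered from V_Xj
    dV = ((xi + omega.T @ phi(xl) + rho * x_off) * np.exp(-gamma * V)
          - rho * I_S) * dt + (sigma / kappa) * np.exp(-gamma * V) * dB
    V = V + dV / C                               # Eq. (5)

print(x, I_S * np.exp(gamma * V) - x_off)        # the two should agree closely
```

Any residual mismatch comes only from the discretization, which is the behaviour the analogue capacitor integration is meant to remove.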
[Figure 3: The stochastic dynamics (gray lines) regenerated by the DN trained on the bifurcating curves (black lines).]
[Figure 4: The stochastic dynamics (gray lines) regenerated by the DN trained on the sinusoidal curve (the black line).]
[Figure 5: The stochastic dynamics (gray lines) regenerated by the DN trained on the QRS segments of electrocardiograms (black lines).]
[Figure 6: The stochastic dynamics (gray lines) regenerated by the DN trained on the handwritten character (the black line).]

2.2 Adapting ρ_j instead of κ_j

The DN has been shown capable of modeling various distributions of continuous paths by adapting ω_ij, ξ_j, and κ_j in [5]. An adaptable κ_j corresponds to an adaptable C_Xj, but a tunable capacitor with a wide linear range is not easy to implement in VLSI. As Eq. (2) indicates that ρ_j is complementary to κ_j in determining the "time constant" of the dynamics of unit j, the possibility of adapting ρ_j instead of κ_j was investigated by Matlab simulation. With κ_j = 1, the DN was trained to model different data by adapting ω_ij, ξ_j, and ρ_j for 100 epochs. A DN with one visible and one hidden unit proved capable of regenerating the dynamics of bifurcating curves (Fig. 3), sinusoidal waves (Fig. 4), and electrocardiograms (Fig. 5). Moreover, a DN with only two visible units was able to regenerate the handwritten character satisfactorily, as illustrated in Fig. 6. These promising results support the suggestion that adapting ρ_j instead of κ_j also allows the DN to model different data. As a variable ρ_j simply corresponds to a tunable current source ρ_j · I_S in Fig. 2, the VLSI implementation is greatly simplified.

2.3 Parameter mappings

Table 1 summarises the parameter mappings between the numerical simulation and the VLSI implementation. All variables except for V_Xj in Fig. 2 are represented as currents in VLSI. The unit currents (I_unit) of x_j, ω_ij, and ξ_j are defined as 10 nA to match the current scales of transistors in subthreshold operation, as well as to reduce the power consumption. Moreover, extensive simulations indicate that the dynamic ranges required for modeling various data are [−3, 5] for x_j and [−30, 30] for ω_ij. With x_off = 5 in Eq. (4), i.e. x_off = 50 nA in VLSI, V_Xj ranges from 773 to 827 mV. While the diffusion process in Eq. (1) is iterated with Δt = 0.05 in numerical simulation, Δt = 0.05 is set to be 5 μs in VLSI, corresponding to a reasonable sampling rate (200 kHz) at which most instruments can sample multiple channels (units) simultaneously. Finally, the unit capacitance for 1/κ_j is calculated as C_unit = I_unit · Δt_unit / V_Xj,unit, equaling 1 pF and resulting in C_Xj = γ · C_unit = 30 pF.

Table 1: Parameter mappings between numerical simulation and VLSI implementation

  parameter | numeric         | circuit          | comment
  x_j       | −3 to 5         | −30 to 50 nA     | I_unit = 10 nA
  x_off     | 5               | 50 nA            | offset term in Eq. (4)
  V_Xj      | 0.773 to 0.827  | 773 to 827 mV    | V_Xj,unit = 1 V
  ω, ξ      | −30 to 30       | −300 to 300 nA   | I_unit = 10 nA
  φ(x_j)    | −1 to 1         | −400 to 400 nA   | activation function
  C_Xj      | γ/κ_j = 30      | 30 pF            | C_unit = 1 pF, t_unit = 0.1 ms
  Δt        | 0.05            | 5 μs             |
  a         | 0.5 to 2        | 0.5 to 2         |
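The mappings in Table 1 amount to a pair of unit conversions. The helper below is an illustration of ours; the constants come from Table 1, and the device scale current I_S is inferred from the table's V_Xj anchor rather than taken from the paper.

```python
# A small sketch (ours) of the Table 1 unit conversions: numeric state
# <-> circuit current <-> log-domain node voltage V_Xj of Eq. (4).
import math

I_UNIT = 10e-9     # 10 nA per numeric current unit (Table 1)
X_OFF = 5.0        # numeric offset x_off, i.e. 50 nA in the circuit
GAMMA = 30.0       # gamma = 1/(n*U_T), extracted as 30 per volt (Sec. 3.1)
# Device scale current inferred from the Table 1 anchor x_j = 5 <-> 827 mV
# (an assumption; the real value is process-dependent):
I_S = 100e-9 * math.exp(-GAMMA * 0.827)

def x_to_circuit(x):
    """Numeric state x_j -> (branch current in A, node voltage V_Xj in V)."""
    current = x * I_UNIT
    v_xj = math.log((x + X_OFF) * I_UNIT / I_S) / GAMMA   # invert Eq. (4)
    return current, v_xj

def v_to_x(v_xj):
    """Node voltage V_Xj -> numeric state x_j, by Eq. (4)."""
    return I_S * math.exp(GAMMA * v_xj) / I_UNIT - X_OFF

for x in (-3.0, 0.0, 5.0):
    i, v = x_to_circuit(x)
    print(f"x={x:+.1f}  I={i*1e9:+6.1f} nA  V_Xj={v*1e3:6.1f} mV  back={v_to_x(v):+.2f}")
```

Running this reproduces the table's ranges: x_j in [−3, 5] maps to currents in [−30, 50] nA and node voltages in [773, 827] mV.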
3 Circuit implementation

A DN with two stochastic units has been designed with the CMOS 0.18 μm technology provided by the Taiwan Semiconductor Manufacturing Company (TSMC). The following subsections introduce the design of each component circuit.

3.1 The EXP element

Fig. 7(b) shows the schematics of the EXP element. With M1 and M2 operated in the subthreshold region, the output current is given as

    I_out = I_B · exp( (V_P − V_N) / (n·U_T) )    (6)

where U_T denotes the thermal voltage and n the subthreshold slope factor. Comparing Eq. (6) with Eq. (4) reveals that γ = 1/(n·U_T). As the drain current (I_d) of a transistor in subthreshold operation is exponentially proportional to its gate-to-source voltage (V_GS), i.e. I_d ∝ e^{V_GS/(n·U_T)}, γ = 1/(n·U_T) is extracted to be 30 by plotting log(I_d) versus V_GS in SPICE.

Transistors M3-M5 form an active biasing circuit that sinks I_B + I_out. By adjusting the gate voltage of M3 through the negative feedback, I_out is allowed to change over several decades. In addition, n actually depends on the gate voltage and introduces variability into γ [13]. To prevent the variable γ from introducing simulation errors, all EXP elements of the DN unit are biased with a constant I_B = 100 nA. As shown by Fig. 7(a), I_out of each element is then re-scaled by the one-quadrant current multiplier based on translinear loops (Fig. 7(c)) [13] to produce I′_out = I_out · I_VAR / I_B, where I_VAR represents the current input to each element in Fig. 2 (e.g. ξ or ρ·x_off).

[Figure 7: The circuit diagram of the EXP element.]

3.2 Current multipliers

Four-quadrant multipliers based on translinear loops [13] are employed to calculate ω_ij · φ(x_i) in Eq. (5). Both ω_ij and φ(x_i) are represented by differential currents as

    ω_ij = I_ω+ − I_ω−,    φ(x_i) = I_φ+ − I_φ−    (7)

Let the differential current (I_Z+ − I_Z−) represent the multiplier's output and I_U represent a unit current. Eq. (8) indicates that the four-quadrant multiplication can be composed of four one-quadrant multipliers of the kind in Fig. 7(c), as illustrated in Fig. 8:

    I_Z+ · I_U − I_Z− · I_U = (I_ω+ · I_φ+ + I_ω− · I_φ−) − (I_ω+ · I_φ− + I_ω− · I_φ+)    (8)

[Figure 8: The four-quadrant current multiplier.]

Fig. 9 shows the simulation result of the four-quadrant multiplier, exhibiting satisfactory linearity over the dynamic ranges required in Table 1.

[Figure 9: The simulation results of the four-quadrant current multiplier.]
[Figure 10: The simulation result of the sigmoid circuit with different V_a.]
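Equation (8) is an algebraic identity on differential signals, which the sketch below (ours) checks numerically; it mirrors how the four one-quadrant translinear products of Fig. 8 combine into a four-quadrant output.

```python
# A quick numeric check (ours) of the four-quadrant decomposition in Eq. (8):
# the product of two signed differential currents equals a difference of four
# one-quadrant (all-positive) products, each realisable by a translinear loop.
import numpy as np

rng = np.random.default_rng(1)
I_U = 10e-9                                  # unit current, 10 nA

def split(value, base=200e-9):
    """Represent a signed current as a pair of positive currents."""
    return base + value / 2, base - value / 2

for _ in range(5):
    omega = rng.uniform(-300e-9, 300e-9)     # omega range per Table 1
    phi = rng.uniform(-400e-9, 400e-9)       # phi(x_i) range per Table 1
    wp, wm = split(omega)
    pp, pm = split(phi)
    # Four one-quadrant products, combined per Eq. (8); the base offsets cancel:
    iz_diff = ((wp * pp + wm * pm) - (wp * pm + wm * pp)) / I_U
    assert np.isclose(iz_diff, omega * phi / I_U)
    print(f"omega*phi/I_U = {omega * phi / I_U * 1e9:+8.2f} nA  ok")
```

The common-mode `base` term drops out of the difference, which is why the circuit's linearity depends only on the translinear products and not on the bias level.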
3.3 Sigmoid function φ(·)

Fig. 11 shows the block diagram for implementing the sigmoid function in Eq. (3). The current I_Xi representing x_i is first converted into a voltage V_i by the operational amplifier (OPA) with a voltage-controlled active resistor (VCR) proposed in [14]. V_i is then sent to an operational transconductance amplifier (OTA) in subthreshold operation, producing an output current of

    I_s = I_B · tanh( (V_i − V_ref) / (2·n·U_T) )    (9)

Since V_i − V_ref = R_i · I_Xi, with R_i representing the resistance of the VCR, the voltage V_a adapts R_i and thus the slope of the sigmoid function. Finally, the second-generation current conveyor (CCII) in Fig. 12 [15] converts the current I_s into a pair of differential currents (I_OUT,N, I_OUT,P) ranging between −400 nA and +400 nA. The differential currents are then duplicated for the inputs of the four-quadrant multipliers of all DN units.

[Figure 11: The block diagram of the sigmoid circuit.]
[Figure 12: The circuit diagram of the single-to-differential current conveyor.]

3.4 Capacitor amplification

As C_Xi = 30 pF requires considerable chip area, C_Xi is implemented by the circuit in Fig. 13, utilising the Miller effect to amplify the capacitance. Let A denote the gain of the amplifier. The effective capacitance between X and Y is C_EQ = C_X · (1 + A). Fig. 13 also shows the schematics of the amplifier, whose gain is designed to be 2. As a result, C_X = 10 pF is sufficient for providing an effective C_Xi of 30 pF.

[Figure 13: The circuit diagram of the capacitor amplified by the Miller effect.]

4 The Diffusion Network in VLSI

Fig. 14 shows the chip layout of the log-domain implementation of the DN with two stochastic units, together with its specification:

  Technology: 1P6M 0.18 μm CMOS
  Power supply: 1.5 Volts
  Power consumption: 345 μW
  Number of units: 2
  Chip area (including pads): 1.368 × 1.368 mm²
  Capability: 1D/2D continuous paths
  Max. bandwidth: 1.6 kHz

[Figure 14: The chip layout and its specification.]

The areas of the core circuit and the capacitors are 0.306 mm² and 0.384 mm², respectively. The total power consumption is merely 345 μW, by the merit of the low supply voltage (1.5 V) and subthreshold operation. The chip has been taped out for fabrication with the CMOS 0.18 μm technology by the TSMC. The post-layout simulations are shown in Figs. 15-18 and described as follows.

With one unit functioning as a visible unit and the other as a hidden unit, the parameters of the DN were programmed to regenerate the one-dimensional paths in Sec. 2.2. The noise current σ·dB/dt was simulated by a piecewise-linear current source with random amplitudes in SPICE. As shown by Figs. 15-17, the visible unit was capable of regenerating the sinusoidal waves, the electrocardiograms, and the bifurcating curves with negligible differences from Figs. 3-5. Moreover, with both units functioning as visible units, the DN was capable of regenerating the handwritten character, as in Fig. 18. These promising results demonstrate the capability of the DN chip to model the distributions of different continuous paths reliably and power-efficiently.

[Figure 15: The sinusoidal dynamics regenerated by the DN chip in post-layout simulation (10 trials).]
[Figure 16: The electrocardiogram dynamics regenerated by the DN chip in post-layout simulation (10 trials).]
[Figure 17: The bifurcating dynamics regenerated by the DN chip in post-layout simulation (8 trials).]
[Figure 18: The handwritten character regenerated by the DN chip in post-layout simulation (10 trials).]
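The piecewise-linear noise source is straightforward to generate offline. The sketch below is ours, not the authors' script: the SPICE PWL point-list format is standard, but the amplitude scaling from the numeric diffusion to circuit currents is an assumption and would need to be matched to the actual design.

```python
# A sketch (ours) of generating the piecewise-linear (PWL) random current
# source used to emulate the noise term sigma*dB/dt in SPICE post-layout
# simulation. Each 5-us segment holds an i.i.d. Gaussian amplitude; the
# scaling of one numeric Brownian increment sigma*sqrt(dt_num) to unit
# currents is an assumed convention, not taken from the paper.
import numpy as np

rng = np.random.default_rng(42)
sigma, dt_num = 1.0, 0.05          # numeric diffusion parameters (Table 1)
dt = 5e-6                          # circuit time step, 5 us (Table 1)
i_unit = 10e-9                     # 10 nA per numeric unit
n_steps = 400                      # 2 ms of simulated time

amps = rng.normal(0.0, sigma * np.sqrt(dt_num), n_steps) * i_unit

with open("noise_pwl.txt", "w") as f:
    pts = []
    for k, a in enumerate(amps):
        pts.append(f"{k * dt:.6e} {a:.4e}")
        pts.append(f"{(k + 1) * dt - 1e-9:.6e} {a:.4e}")   # hold, then jump
    f.write("PWL(" + " ".join(pts) + ")")
```

The emitted point list can then be pasted into a current-source statement of the SPICE netlist driving the V_Xj node.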
After the chip is fabricated in August, it will be tested and the measurement results will be presented at the conference.

5 Conclusion

The log-domain representation of the Diffusion Network has been derived and translated into analogue VLSI circuits. Based on well-defined parameter mappings, the DN chip is shown capable of regenerating various types of continuous paths, and the log-domain representation allows the diffusion processes to be simulated in real time and within a limited dynamic range. In other words, analogue VLSI circuits prove useful for solving (simulating) multiple SDEs in real time and in a power-efficient manner. After verification of the chip functionality, a DN chip with a scalable number of units will be further developed for recognising multi-channel, time-varying biomedical signals in implantable microsystems.

Acknowledgments

The authors thank the National Chip Implementation Center (CIC) for fabrication services, and Mr. C.-M. Lai and S.-C. Sun for helpful discussions.

References

[1] G. Iddan, G. Meron, A. Glukhovsky, and P. Swain, "Wireless capsule endoscopy," Nature, vol. 405, no. 6785, p. 417, July 2000.
[2] T. W. Berger, M. Baudry, R. D. Brinton, J.-S. Liaw, V. Z. Marmarelis, A. Y. Park, B. J. Sheu, and A. R. Tanguay, Jr., "Brain-implantable biomimetic electronics as the next era in neural prosthetics," Proc. IEEE, vol. 89, no. 7, pp. 993-1012, July 2001.
[3] M. A. Lebedev and M. A. L. Nicolelis, "Brain-machine interfaces: past, present and future," Trends in Neuroscience, vol. 29, no. 9, pp. 536-546, 2006.
[4] J. R. Movellan, "A learning theorem for networks at detailed stochastic equilibrium," Neural Computation, vol. 10, pp. 1157-1178, July 1998.
[5] J. R. Movellan, P. Mineiro, and R. J. Williams, "A Monte Carlo EM approach for partially observable diffusion processes: Theory and applications to neural networks," Neural Computation, vol. 14, pp. 1507-1544, July 2002.
[6] H. Chen and A. F. Murray, "A continuous restricted Boltzmann machine with an implementable training algorithm," IEE Proc. of Vision, Image and Signal Processing, vol. 150, no. 3, pp. 153-158, 2003.
[7] D. F. Specht, "Probabilistic neural networks," Neural Networks, vol. 3, no. 1, pp. 109-118, 1990.
[8] Y. S. Hsu, T. J. Chiu, and H. Chen, "Real-time recognition of continuous-time biomedical signals using the diffusion network," in Proc. of the Int. Joint Conf. on Neural Networks (IJCNN), 2008, pp. 2628-2633.
[9] L. O. Chua, T. Roska, T. Kozek, and A. Zarandy, "CNN universal chips crank up the computing power," IEEE Circuits and Devices Mag., vol. 12, no. 4, pp. 18-28, July 1996.
[10] T. Serrano-Gotarredona and B. Linares-Barranco, "Log-domain implementation of complex dynamics reaction-diffusion neural networks," IEEE Trans. Neural Networks, vol. 14, pp. 1337-1355, Sept. 2003.
[11] D. R. Frey, "Exponential state space filters: A generic current mode design strategy," IEEE Trans. Circuits Syst. I, vol. 43, pp. 34-42, Jan. 1996.
[12] E. Vittoz and J. Fellrath, "CMOS analog integrated circuits based on weak inversion operation," IEEE J. Solid-State Circuits, vol. 12, pp. 224-231, June 1977.
[13] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, Analog VLSI: Circuits and Principles. The MIT Press, 2002.
[14] M. Banu and Y. Tsividis, "Floating voltage-controlled resistors in CMOS technology," Electronics Letters, vol. 18, no. 15, pp. 678-679, July 1982.
[15] C. Toumazou, F. J. Lidgey, and D. G. Haigh, Analogue IC Design: The Current-Mode Approach. Peter Peregrinus Ltd, 1990.
Inference and communication in the game of Password

Yang Xu and Charles Kemp
Machine Learning Department, School of Computer Science, and Department of Psychology
Carnegie Mellon University
{yx@cs.cmu.edu, ckemp@cmu.edu}

Abstract

Communication between a speaker and hearer will be most efficient when both parties make accurate inferences about the other. We study inference and communication in a television game called Password, where speakers must convey secret words to hearers by providing one-word clues. Our working hypothesis is that human communication is relatively efficient, and we use game show data to examine three predictions. First, we predict that speakers and hearers are both considerate, and that both take the other's perspective into account. Second, we predict that speakers and hearers are calibrated, and that both make accurate assumptions about the strategy used by the other. Finally, we predict that speakers and hearers are collaborative, and that they tend to share the cognitive burden of communication equally. We find evidence in support of all three predictions, and demonstrate in addition that efficient communication tends to break down when speakers and hearers are placed under time pressure.

1 Introduction

Communication and inference are intimately linked. Suppose, for example, that Joan states that some of her pets are dogs. Under normal circumstances, a hearer will infer that not all of Joan's pets are dogs, on the grounds that Joan would have expressed herself differently if all of her pets were dogs [1]. Inferences like these have been widely studied by linguists and psychologists [2, 3, 4, 5] and are often encountered in everyday settings. One compelling explanation is presented by Levinson [4], who points out that speaking (i.e. phonetic articulation) is substantially slower than thinking (i.e. inference). As a result, communication will be maximally efficient if a speaker's utterance leaves inferential gaps that will be bridged by the hearer.

Inference, however, is not only the responsibility of the hearer. For communication to be maximally efficient, a speaker must take the hearer's perspective into account ("if I say X, will she infer Y?"). The hearer should therefore allow for inferences on the part of the speaker ("did she think that saying X would lead me to infer Y?"). Considerations of this sort rapidly lead to a game-theoretic regress, and achieving efficient communication under these circumstances begins to look like a very challenging problem.

Here we study a simple communication game that allows us to explore inferences made by speakers and hearers. Inference becomes especially important in settings where speakers are prevented from directly expressing the concepts they have in mind, and where utterances are constrained to be short. The television show Password is organized around a game that satisfies both constraints. In this game, a speaker is supplied with a single, secret word (the password) and must communicate this word to a hearer by choosing a single one-word clue. For example, if the password is "mend", then the speaker might choose "sew" as the clue, and the hearer might guess "stitch" in response. Figure 1 shows several examples drawn from the show; note that communication is successful in the first
Figure 1 shows several examples drawn from the show?note that communication is successful in the first 1 0 0 digger separate split stitch ?2 b subtract ?4 conquer numbers heal seam ditch pick ?6 ?6 part bend spoon rip dirt work quotient ?6 ?4 ?2 ?8 ?8 0 ?6 ?4 clue:multiply; guess:divide ?2 subtract ?6 b reproduce ?4 ?2 log forward strength (Hf) 0 ?8 ?8 pwd:mend clothes pants ?6 pwd: shovel thread ?4 ?6 ?2 0 ski ?2 yarn pin fabric math rabbit ?6 add H factor ?4 clue:snow; guess:flake 0 needle ?2 division times ?6 Sf 0 pwd:divide ?8 ?8 ?8 ?8 0 clue:sew; guess:stitch 0 ?4 ?2 Sf ?4 Hf ?2 hail Hb ?8 ?8 dig tool ?4 break log forward strength (Sf) log backward strength (Hb) fix ?4 restore ?6 spade scoop ?2 Sb ?2 password:shovel; clue:snow 0 S log backward strength (Sb) password:divide; clue:multiply password:mend; clue:sew 0 white cold ?4 ?6 fall ball ?8 ?8 ?6 ?4 ?2 0 Hf Figure 1: Three rounds from the television game show Password. Given each password, the top row plots the forward (Sf : password ? clue) and backward (Sb : password ? clue) strengths for several potential clues. The clue chosen by the speaker is circled. Given this clue, the bottom row plots the forward (Hf : clue ? guess) and backward (Hb : clue ? guess) strengths for several potential guesses. The guess chosen by the hearer is circled and the password is indicated by an arrow. The first two columns represent two normal rounds, and the final column is a lightning round where speakers and hearers are placed under time pressure. The gray dots in each plot show words that are associated with the password (top row) or clue (bottom row) in the University of Southern Florida word association database. Labels for these words are included where space permits. example but not in the remaining two. The clues and guesses generated by speakers and hearers are obviously much simpler than most real-world linguistic utterances, but studying a setting this simple allows us to develop and evaluate formal models of communication. Our analyses therefore contribute to a growing body of work that uses formal methods to explore the efficiency of human communication [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. At first sight the optimal strategies for speaker and hearer may seem obvious: the speaker should generate the clue that is associated most strongly with the password, and the hearer should guess the word that is associated most strongly with the clue. Note, however, that word associations are asymmetric. Given a pair of words such as ?shovel? and ?snow?, the forward association (shovel ? snow) may be strong but the backward association (shovel ? snow) may be weak. The third example in Figure 1 shows a case where communication fails because the speaker chooses a clue with a strong forward association but a weak backward association. Although the data include examples like the case just described, we hypothesize that speakers and hearers are both considerate: in other words, that both parties attempt to take the other?s perspective into account. We test this hypothesis by exploring whether speakers and hearers tend to take backward associations into account when generating their clues and guesses. Our second hypothesis is that speaker and hearer are calibrated: in other words, that both make accurate assumptions about the strategy used by the other. Taking the other person?s perspective into account is a good start, but is no guarantee of calibration. 
Suppose, for example, that the speaker attempts to make the hearer's task as easy as possible, and considers only backward associations when choosing his clue. This strategy will work best if the hearer considers only forward associates of the clue, but suppose that the hearer considers only backward associations, on the theory that the speaker probably generated his clue by choosing a forward associate. In this case, both parties are considerate but not calibrated, and communication is unlikely to prove successful.

Our third hypothesis is that speakers and hearers are collaborative: in other words, that they settle on strategies that tend to share the cognitive burden of communication. In operationalizing this hypothesis we assume that forward associates are easier for people to generate than backward associates. A pair of strategies can be calibrated but not collaborative: for example, the speaker and hearer will be calibrated if both agree that the speaker will consider only forward associates, and the hearer will consider only backward associates. This policy, however, is likely to demand more effort from the hearer than the speaker, and we propose that speakers and hearers will satisfy the principle of least collaborative effort [17, 18] by choosing a calibrated pair of strategies where each person weights forward and backward associates equally.

To evaluate our hypotheses we use word association data to analyze the choices made by game show contestants. We first present evidence that speakers and hearers are considerate and take both forward and backward associations into account. We then develop simple models of the speaker and hearer, and use these models to explore the extent to which speakers and hearers weight forward and backward associations. Our results suggest that speakers and hearers are both calibrated and collaborative under normal conditions, but that calibration and collaboration tend to break down under time pressure.

2 Game show and word association data

We collected data from the Password game show hosted by Allen Ludden on CBS. Previous researchers have used game show data to explore several aspects of human decision-making [19], but to our knowledge the game of Password has not been previously studied. In each game round, a single English word (the password) is shown to speakers on two competing teams. With each team taking turns, the speaker gives a one-word clue to the hearer and the hearer makes a one-word guess in return. The team that performs best proceeds to the lightning rounds, where the same game is played under time pressure. Our data set includes passwords, speaker-generated clues and hearer-generated guesses for 100 normal and 100 lightning rounds sampled from the show episodes during 1962-1967. Each round includes a single password and potentially multiple clues and guesses from both teams. For all of our analyses, we use only the first clue-guess pair in each round.

The responses of speakers and hearers are likely to depend heavily on word associations, and we can therefore use word association data to model both speakers and hearers. We used the word association database from the University of South Florida (USF) for all of our analyses [20]. These data were collected using a free association task, where participants were given a cue word and asked to generate a single associate of the cue. More than 6000 participants contributed to the database, and each generated associates for 100-120 English words.
To allow for weak associates that were not generated by these participants, we added a count of 1 to the observed frequency for each cue-target pair in the database. The forward strength (w_i → w_j) is defined as the proportion of w_i trials where w_j was generated as an associate. The backward strength, written (w_i ⇝ w_j), is proportional to the forward strength (w_j → w_i) but is normalized with respect to all forward strengths into w_i:

    (w_i ⇝ w_j) = (w_j → w_i) / Σ_k (w_k → w_i)    (1)

Note that this normalization ensures that both forward and backward strengths can be treated as probabilities. The correlation between forward strengths and backward strengths is positive but low (r = 0.32), suggesting that our game show analyses may be able to differentiate the influence of forward and backward associations. The USF database includes associates for a set of 5016 words, and we used this set as the lexicon for all of our analyses. Some of the rounds in our game show data include passwords, clues or guesses that do not appear in this lexicon, and we removed these rounds, leaving 68 password-clue and 68 clue-guess pairs in the normal rounds, and 86 password-clue pairs and 80 clue-guess pairs in the lightning rounds. The USF database also includes the frequency of each word in a standard corpus of written English [21], and we use these frequencies in our first analysis.
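Both quantities fall out of a single count matrix. The sketch below (ours, with toy counts standing in for the USF norms) computes the smoothed forward strengths and the normalized backward strengths of Equation 1.

```python
# A minimal sketch (ours, with toy counts standing in for the USF norms) of
# the forward strengths and the backward strengths of Equation 1.
import numpy as np

words = ["shovel", "snow", "dig", "winter"]
# counts[i, j] = number of trials with cue words[i] on which words[j] was
# produced as an associate (toy numbers for illustration).
counts = np.array([[0, 40, 55, 0],
                   [3, 0, 0, 70],
                   [25, 0, 0, 0],
                   [0, 60, 0, 0]], float)

counts += 1.0                                          # add-one smoothing, as above
forward = counts / counts.sum(axis=1, keepdims=True)   # (w_i -> w_j)
# Backward strength (w_i ~> w_j): the forward strength (w_j -> w_i),
# renormalized over all forward strengths into w_i (Equation 1).
backward = forward.T / forward.T.sum(axis=1, keepdims=True)

i, j = words.index("shovel"), words.index("snow")
print(f"forward  shovel -> snow: {forward[i, j]:.3f}")
print(f"backward shovel ~> snow: {backward[i, j]:.3f}")
```

With counts like these, the forward strength from "shovel" to "snow" is high while the backward strength is low, which is exactly the asymmetry exploited in the analyses that follow.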
A considerate speaker, however, may attempt to generate strong backward associates, which will make it easier for the hearer to successfully guess the password. Similarly, a hearer who considers the task faced by the speaker should also take backward associates into account. This section describes some initial analyses that explore whether clues and guesses are shaped by backward associations. Figure 2a.i compares forward and backward strengths as predictors of the responses chosen by speakers and hearers. A dimension is a successful predictor if the words chosen by contestants tend to have low ranks along this dimension with respect to the 5016 words in the lexicon (rank 1 is the top rank). We handle ties using fractional ranking, which means that it is sensible to compare mean ranks along each dimension. In Figure 2a.i, Sf and Sb represent forward (password ? clue) and backward (password ? clue) strengths for the speaker, and Hf and Hb represent forward (clue ? guess) and 4 backward (clue ? guess) strengths for the hearer. In addition to forward and backward strengths, we also considered word frequency as a predictor. Across both normal (SN and HN ) and lightning (SL and HL ) rounds, the ranks along the forward and backward dimensions are substantially better than ranks along the frequency dimension (p < 0.01 in pairwise t-tests), and we therefore focus on forward and backward strengths for the rest of our analyses. For data set SN the mean ranks suggest that forward and backward strengths appear to predict choices about equally well. The third dimension Sf + Sb is created by combining dimensions Sf and Sb . Word w1 dominates w2 if it is superior along one dimension and no worse along the other, and the rank for each word along the combined dimension is based on the number of words that dominate it. For data set SN , the mean rank based on the Sf + Sb dimension is lower than that for Sf alone, suggesting that backward strengths make a predictive contribution that goes beyond the information present in the forward associations. Note, however, that the difference between mean ranks for Sf and Sf + Sb is not statistically significant. For data set HN , Figure 2a.i provides little evidence that backward strengths make a contribution that goes beyond the forward strengths. Figure 2a.ii plots the rank of each guess along the dimensions of forward and backward strength. The correlation between the dimensions is relatively high, suggesting that both dimensions tend to capture the information present in the other. As a result, the hearer data set HN may offer little opportunity to explore whether backward and forward associations both contribute to people?s responses. Figure 2a.iii shows the results of an analysis that explores more directly whether each dimension makes a contribution that goes beyond the other. We compared each ?actual word? (i.e. each clue or guess chosen by a contestant) to ?matched words? that are matched in rank along one of the dimensions. For example, if the backward dimension matters, then the actual words should tend to be better along the b dimension than words that are matched along the f dimension. The first group of bars in Figure 2a.iii shows the proportion of actual words that are better (Bb), equivalent (Eb) or worse (W b) along the backward dimension than matches along the forward dimension. The Bb bar is higher than the others, suggesting that the backward dimension does indeed make a contribution that goes beyond the forward dimension. 
Note that a match is defined as a word that is ranked the same as the actual word, or in cases where there are no ties, a word that is ranked one step better. The fourth bar (Cf , for champion along the forward dimension) includes all cases where a word is ranked best along the forward dimension, which means that no match can be found. Our policy for identifying matches is conservative?all other things being equal, actual words should be equivalent (Eb) or worse (W b) than the matched words, which means that the large Bb bar provides strong evidence that the backward dimension is important. A binomial test confirms that the Bb bar is significantly greater than the W b bar (p < 0.05). The Bf bar for the speaker data is also high, suggesting that the forward dimension makes a contribution that goes beyond the backward dimension. In other words, Figure 2a.iii suggests that both dimensions influence the responses of the speaker. The results for the hearer data HN provide additional support for the idea that neither dimension predicts hearer guesses better than the other. Note, for example, that the second group of four bars in Figure 2a.iii suggests that the forward dimension is not predictive once the backward dimension is taken into account (Bf is smaller than W f ). This result is consistent with our previous finding that forward and backward strengths are highly correlated in the case of the hearer, and that neither dimension makes a contribution after controlling for the other. Our analyses so far suggest that forward and backward strengths both make independent contributions to the choices made by speakers, but that the hearer data do not allow us to discriminate between these dimensions. Figure 2b shows similar analyses for the lightning rounds. The most notable change is that backward strengths appear to play a much smaller role when speakers are placed under time pressure. For example, Figure 2b.i suggests that backward strengths are now worse than forward strengths at predicting the clues chosen by speakers. Relative to the results for the normal rounds SN , the Bb counts for SL in Figure 2b.iii show a substantial drop (53% decrease) and the Bf counts show an increase of similar scale. ?2 goodness-of-fit tests show that the distributions of counts for both {Bb, Eb, W b, Cf } and {Bf, Ef, W f, Cb} in the lightning rounds significantly deviate from those in the normal rounds (p < 0.01). This result provides further evidence that speakers tend to rely more heavily on forward associations than backward associations when placed under time pressure. 5 S0 S1 S2 Speaker distribution pS (c|w) (w ? c) (w ? c) (2) (2) ?S (w ? c) + ?S (w ? c) .. . Sn ?S (w ? c) + ?S (w ? c) (n) (n) H0 H1 H2 Hearer distribution pH (w|c) (c ? w) (c ? w) (2) (2) ?H (c ? w) + ?H (c ? w) .. . Hn ?H (c ? w) + ?H (c ? w) (n) (n) Table 1: Strategies for speaker and hearer. In each case we assume that the speaker and hearer sample words from distributions pS (c|w) and pH (w|c) based on the expressions shown. At level 0, both speaker and hearer rely entirely on forward associates, and at level 1, both parties rely entirely on backward associates. For each party, the strategy at level k is the best choice assuming that the other person uses a strategy at a level lower than k. Our previous analyses found little evidence that forward and backward strengths make separate contributions in the case of the hearer, but the lightning data HL suggest that these dimensions may indeed make separate contributions. 
Figure 2b.iii suggests that time pressure affects these dimensions differently: note that Bb counts decrease by 19% and Bf counts increase by 64%. ?2 tests confirm that the distributions of {Bb, Eb, W b, Cf} and {Bf, Ef, W f, Cb} in the lightning rounds significantly deviate from those in the normal rounds (p < 0.01), suggesting that the hearer (like the speaker) tends to rely on forward strengths rather than backward strengths in the lightning rounds. Taken together, the full set of results in Figure 2 suggests that the responses of speakers and hearers are both shaped by backward associates?in other words, that both parties are considerate of the other person?s situation. The evidence in the case of the speaker is relatively strong and all of the analyses we considered suggest that backward associations play a role. The evidence is weaker in the case of the hearer, and only the comparison between normal and lightning rounds suggests that backward associations play some role. 4 Efficient communication: calibration and collaboration Our analyses so far provide some initial evidence that speakers and hearers are both influenced by forward and backward associations. Given this result, we now consider a model that explores how forward and backward associations are combined in generating a response. 4.1 Speaker and hearer models Since both kinds of associations appear to play a role, we explore a simple speaker model which assumes that the clue c chosen for the password w is sampled from a mixture distribution pS (c|w) = ?S (w ? c) + ?S (w ? c) (2) where (w ? c) indicates the forward strength from w to c, (w ? c) indicates the backward strength from c to w, and ?S and ?S are mixture weights that sum to 1. The corresponding hearer model assumes that guess w given clue c is sampled from the mixture distribution pH (w|c) = ?H (c ? w) + ?H (c ? w). (3) Several possible mixture distributions for speaker and hearer are shown in Table 1. For example, the level 0 distributions assume that speaker and hearer both rely entirely on forward associates, and the level 1 distributions assume that both rely entirely on backward associates. By fitting mixture weights to the game show data we can explore the extent to which speaker and hearer rely on forward and backward associations. The mixture models in Equations 2 and 3 can be derived by assuming that the hearer relies on Bayesian inference. Using Bayes? rule, the hearer distribution pH (w|c) can be expressed as pH (w|c) ? pS (c|w)p(w). 6 (4) To simplify our analysis we make three assumptions. First, we assume that the prior p(w) in Equation 4 is uniform. Second, we assume that contestants are near-optimal in many respects but that they sample rather than maximize. In other words, we assume that the hearer samples a guess w from the distribution pH (w|c) in Equation 4, and that the speaker samples a clue from a distribution pS (c|w) ? pH (w|c). Finally, we assume that the normalizing constant in Equation 1 is 1 for all words wi . This assumption seems reasonable since for our smoothed data set the mean value of the normalizing constant is 1 and the standard deviation is 0.04. Our final assumption simplifies matters considerably since it implies that (wi ? wj ) = (wj ? wi ) for all pairs wi and wj . Given these assumptions it is straightforward to show that the level 0 strategies in Table 1 are the best responses to the level 1 strategies, and vice versa. For example, if the speaker uses strategy S0 and samples a clue c from the distribution pS (c|w) = w ? 
For example, if the speaker uses strategy S0 and samples a clue c from the distribution p_S(c|w) = (w → c), then Equation 4 suggests that the hearer should sample a guess w from the distribution p_H(w|c) ∝ (w → c) = (c ⇝ w). Similarly, if the speaker uses strategy S1 and samples a clue c from the distribution p_S(c|w) = (w ⇝ c), then Equation 4 suggests that the hearer should sample a guess w from the distribution p_H(w|c) ∝ (w ⇝ c) = (c → w).

Suppose now that the hearer is uncertain about the strategy used by the speaker. A level 2 hearer assumes that the speaker could use strategy S0 or strategy S1, and assigns prior probabilities β_H^(2) and α_H^(2) to these speaker strategies. Since H1 is the appropriate response to S0 and H0 is the appropriate response to S1, the level 2 hearer should sample from the distribution

    p_H(w|c) = p(S1) · p_H(w|c, S1) + p(S0) · p_H(w|c, S0) = α_H^(2) · (c → w) + β_H^(2) · (c ⇝ w).    (5)

More generally, suppose that a level n hearer assumes that the speaker uses a strategy from the set {S0, S1, ..., Sn−1}. Since the appropriate response to any one of these strategies is a mixture similar to Equation 5, it follows that strategy Hn is also a mixture of the distributions (c → w) and (c ⇝ w). A similar result holds for the speaker, and strategy Sn in Table 1 also takes the form of a mixture distribution. Our Bayesian analysis therefore suggests that efficient speakers and hearers can be characterized by the mixture models in Equations 2 and 3.

Some pairs of mixture models are calibrated in the sense that the hearer model is the best choice given the speaker model and vice versa. Equation 4 implies that calibration is achieved when the forward weight for the speaker matches the backward weight for the hearer (α_S = β_H) and the backward weight for the speaker matches the forward weight for the hearer (β_S = α_H). If game show contestants achieve efficient communication, then mixture weights fit to their responses should come close to satisfying this calibration condition.

There are many sets of weights that satisfy the calibration condition. For example, calibration is achieved if the speaker uses strategy S0 and the hearer uses strategy H1. If generating backward associates is more difficult than thinking about forward associates, this solution seems unbalanced, since the hearer alone is required to think about backward associates. Consistent with the principle of least collaborative effort, we make a second prediction that speaker and hearer will collaborate and share the communicative burden equally. More precisely, we predict that both parties will assign the same weight to backward associates, and that β_S will equal β_H. Combining our two predictions, we expect that the weights which best characterize human responses will have α_S = β_S = α_H = β_H = 0.5.

4.2 Fitting forward and backward mixture weights to the data

To evaluate our predictions we assumed that the speaker and hearer are characterized by Equations 2 and 3, and identified the mixture weights that best fit the game show data. Assuming that the M game rounds are independent, the log likelihood for the speaker data is

    L = log Π_{m=1..M} p_S(c_m|w_m) = Σ_{m=1..M} log[ α_S · (w_m → c_m) + β_S · (w_m ⇝ c_m) ]    (6)

and a similar expression is used for the hearer data. We fit the weights α_S and β_S by maximizing the log likelihood in Equation 6. Since this likelihood is concave in the single free parameter (α_S + β_S = 1), the global optimum can be found by a simple line search over the range 0 < α_S < 1.
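Fitting the single free weight is a one-dimensional search. The sketch below (ours; synthetic strengths stand in for the game show pairs) maximizes the log likelihood of Equation 6 over a grid of α_S values and bootstraps over rounds, the same kind of procedure used to produce Figure 3.

```python
# A sketch (ours) of fitting the speaker mixture weight alpha_S by a line
# search on the log likelihood of Equation 6. Synthetic per-round strengths
# stand in for the real (password -> clue) and (password ~> clue) values.
import numpy as np

rng = np.random.default_rng(7)
M = 68                                  # number of normal-round pairs
f = rng.beta(2, 20, M)                  # forward strengths (w_m -> c_m)
b = rng.beta(2, 20, M)                  # backward strengths (w_m ~> c_m)

grid = np.linspace(1e-3, 1 - 1e-3, 999)

def fit(fv, bv):
    ll = [np.sum(np.log(a * fv + (1 - a) * bv)) for a in grid]  # Equation 6
    return grid[int(np.argmax(ll))]

print(f"fitted alpha_S = {fit(f, b):.3f}")

# Bootstrap over rounds, as in Figure 3, for a spread on the estimate:
boots = [fit(f[idx], b[idx])
         for idx in (rng.integers(0, M, M) for _ in range(200))]
print(f"bootstrap sd = {np.std(boots):.3f}")
```

The same routine, applied to the hearer pairs and to the lightning rounds separately, yields the four weight estimates summarized in Figure 3a.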
[Figure 3: three panels, (a) mixture weights, (b) log(α/β), (c) response time (sec), plotted for speaker (S) and hearer (H) in normal (N) and lightning (L) rounds; axis values omitted.]

Figure 3: (a) Fitted mixture weights for the speaker (S) and hearer (H) models based on bootstrapped normal (N) and lightning (L) rounds. α and β are weights on the forward and backward strengths. (b) Log-ratios of the α and β weights estimated from bootstrapped normal and lightning rounds. (c) Average response times for speakers choosing clues and hearers choosing guesses in normal and lightning rounds. Averages are computed over 30 rounds randomly sampled from the game show.

We ran separate analyses for normal and lightning rounds, and ran similar analyses for the hearer data. 1000 estimates of each mixture weight were computed by bootstrapping game show rounds while keeping tallies of normal and lightning rounds constant. Consistent with our predictions, the results in Figure 3a suggest that all four mixture weights for the normal rounds are relatively close to 0.5. Both speaker and hearer appear to weight forward associates slightly more heavily than backward associates, but 0.5 is within one standard deviation of the bootstrapped estimates in all four cases. The lightning rounds produce a different pattern of results and suggest that the speaker now relies much more heavily on forward than backward associates. Figure 3b shows log ratios of the mixture weights, and indicates that these ratios lie close to 0 (i.e., α = β) in all cases except for the speaker in the lightning rounds. Further confidence tests show that the percentage of bootstrapped ratios exceeding 0 is 100% for the speaker in the lightning rounds, but 85% or lower in the three remaining cases. Consistent with our previous analyses, this result suggests that coordinating with the hearer requires some effort on the part of the speaker, and that this coordination is likely to break down under time pressure. The fitted mixture weights, however, do not confirm the prediction that time pressure makes it difficult for the hearer to consider backward associations. Figure 3c helps to explain why mixture weights for the speaker but not the hearer may differ across normal and lightning rounds. The difference in response times between normal and lightning rounds is substantially greater for the speaker than for the hearer, suggesting that any differences between normal and lightning rounds are more likely to emerge for the speaker than for the hearer.

5 Conclusion

We studied how speakers and hearers communicate in a very simple context. Our results suggest that both parties take the other person's perspective into account, that both parties make accurate assumptions about the strategy used by the other, and that the burden of communication is equally divided between the two. All of these conclusions support the idea that human communication is relatively efficient. Our results, however, suggest that efficient communication is not trivial to achieve, and tends to break down when speakers are placed under time pressure. Although we worked with simple models of the speaker and hearer, note that neither model is intended to capture psychological processing. Future studies can explore how our models might be implemented by psychologically plausible mechanisms. For example, one possibility is that speakers sample a small set of words with high forward strengths, then choose the word in this sample with greatest backward strength.
Different processing models might be considered, but we believe that any successful model of speaker or hearer will need to include some role for inferences about the other person.

Acknowledgments

This work was supported in part by the Richard King Mellon Foundation (YX) and by NSF grant CDI-0835797 (CK).
Predictive State Temporal Difference Learning

Geoffrey J. Gordon
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
ggordon@cs.cmu.edu

Byron Boots
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
beb@cs.cmu.edu

Abstract

We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem.

1 Introduction

We wish to estimate the value function of a policy in an unknown decision process in a high-dimensional and partially-observable environment. We represent the value function in a linear architecture, as a linear combination of features of (sequences of) observations. A popular family of learning algorithms called temporal difference (TD) methods [1] is designed for this situation. In particular, least-squares TD (LSTD) algorithms [2, 3, 4] exploit the linearity of the value function to estimate its parameters from sampled trajectories, i.e., from sequences of feature vectors of visited states, by solving a set of linear equations. Recently, Parr et al. looked at the problem of value function estimation from the perspective of both model-free and model-based reinforcement learning [5]. The model-free approach (which includes TD methods) estimates a value function directly from sample trajectories. The model-based approach, by contrast, first learns a model of the process and then computes the value function from the learned model. Parr et al. demonstrated that these two approaches compute exactly the same value function [5]. In the current paper, we build on this insight, while simultaneously finding a compact set of features using powerful methods from system identification. First, we look at the problem of improving LSTD from a model-free predictive-bottleneck perspective: given a large set of features of history, we devise a new TD method called Predictive State Temporal Difference (PSTD) learning. PSTD estimates the value function through a bottleneck that preserves only predictive information (Section 3). Second, we look at the problem of value function estimation from a model-based perspective (Section 4).
Instead of learning a linear transition model in feature space, as in [5], we use subspace identification [6, 7] to learn a PSR from our samples. Since PSRs are at least as compact as POMDPs, our representation can naturally be viewed as a value-directed compression of a much larger POMDP. Finally, we show that our two improved methods are equivalent. This result yields some appealing theoretical benefits: for example, PSTD features can be explicitly interpreted as a statistically consistent estimate of the true underlying system state. And, the feasibility of finding the true value function can be shown to depend on the linear dimension of the dynamical system, or equivalently, the dimensionality of the predictive state representation, not on the cardinality of the POMDP state space. Therefore our representation is naturally "compressed" in the sense of [8], speeding up convergence. We demonstrate the practical benefits of our method with several experiments: we compare PSTD to competing algorithms on a synthetic example and a difficult optimal stopping problem. In the latter problem, a significant amount of prior work has gone into hand-tuning features. We show that, if we add a large number of weakly relevant features to these hand-tuned features, PSTD can find a predictive subspace which performs much better than competing approaches, improving on the best previously reported result for this problem by a substantial margin. The theoretical and empirical results reported here suggest that, for many applications where LSTD is used to compute a value function, PSTD can simply be substituted to produce better results.

2 Value Function Approximation

We start from a discrete-time dynamical system with a set of states S, a set of actions A, a distribution π0 over initial states, a transition function T, a reward function R, and a discount factor γ ∈ [0, 1]. We seek a policy π, a mapping from states to actions. For a given policy π, the value of state s is defined as the expected discounted sum of rewards when starting in state s and following policy π: J^π(s) = E[ Σ_{t=0}^∞ γ^t R(s_t) | s_0 = s, π ]. The value function obeys the Bellman equation

J^π(s) = R(s) + γ Σ_{s′} J^π(s′) Pr[s′ | s, π(s)]   (1)

If we know the transition function T, and if the set of states S is sufficiently small, we can find an optimal policy with policy iteration: pick an initial policy π, use (1) to solve for the value function J^π, compute the greedy policy for J^π (setting the action at each state to maximize the right-hand side of (1)), and repeat. A worked instance of this evaluation step appears in the sketch below.
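As a minimal illustration of Equation 1, the snippet below evaluates a fixed policy on an assumed four-state chain by solving the Bellman equation in closed form. The transition matrix and rewards are invented for illustration; the same closed form reappears in Section 5.1.

import numpy as np

# Assumed toy problem: T[s, s2] = Pr[s2 | s, pi(s)] under a fixed policy,
# R the per-state reward, gamma the discount factor.
T = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.1, 0.9, 0.0],
    [0.0, 0.0, 0.1, 0.9],
    [0.9, 0.0, 0.0, 0.1],
])
R = np.array([0.0, 0.0, 1.0, 0.0])
gamma = 0.9

# Equation 1 in matrix form: J = R + gamma * T J, so J = (I - gamma T)^{-1} R.
J = np.linalg.solve(np.eye(4) - gamma * T, R)
print(J.round(3))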
However, we consider instead the harder problem of estimating the value function when s is a partially observable latent variable, and when the transition function T is unknown. In this situation, we receive information about s through observations from a finite set O. We can no longer make decisions or predict reward based on s, but instead must use a history (an ordered sequence of action-observation pairs h = a_1^h o_1^h … a_t^h o_t^h that have been executed and observed prior to time t): we work with R(h), J(h), and π(h) instead of R(s), J^π(s), and π(s). Let H be the set of all possible histories. H is often very large or infinite, so instead of finding a value separately for each history, we focus on value functions that are linear in features of histories:

J^π(h) = w^T φ^H(h)   (2)

Here w ∈ R^j is a parameter vector and φ^H(h) ∈ R^j is a feature vector for a history h. So, we can rewrite the Bellman equation as

w^T φ^H(h) = R(h) + γ Σ_{o∈O} w^T φ^H(hπo) Pr[hπo | hπ]   (3)

where hπo is history h extended by taking action π(h) and observing o.

2.1 Least Squares Temporal Difference Learning

In general we don't know the transition probabilities Pr[hπo | h], but we do have samples of state features φ^H_t = φ^H(h_t), next-state features φ^H_{t+1} = φ^H(h_{t+1}), and immediate rewards R_t = R(h_t). We can thus estimate the Bellman equation

w^T Φ^H_{1:k} ≈ R_{1:k} + γ w^T Φ^H_{2:k+1}   (4)

(Here we have used Φ^H_{1:k} to mean the matrix whose columns are φ^H_t for t = 1 … k.) We can immediately attempt to estimate the parameter w by solving this linear system in the least squares sense: ŵ^T = R_{1:k} (Φ^H_{1:k} − γ Φ^H_{2:k+1})^†, where † indicates the pseudo-inverse. However, this solution is biased [3], since the independent variables φ^H_t − γ φ^H_{t+1} are noisy samples of the expected difference E[φ^H(h) − γ Σ_{o∈O} φ^H(hπo) Pr[hπo | h]]. In other words, estimating the value function parameters w is an error-in-variables problem. The least squares temporal difference (LSTD) algorithm finds a consistent estimate of w by right-multiplying the approximate Bellman equation (Equation 4) by φ^H_t{}^T:

ŵ^T = ( (1/k) Σ_{t=1}^k R_t φ^H_t{}^T ) ( (1/k) Σ_{t=1}^k φ^H_t φ^H_t{}^T − (γ/k) Σ_{t=1}^k φ^H_{t+1} φ^H_t{}^T )^{−1}   (5)

Here, φ^H_t can be viewed as an instrumental variable [3], i.e., a measurement that is correlated with the true independent variables but uncorrelated with the noise in our estimates of these variables. As the amount of data k increases, the empirical covariance matrices Φ^H_{1:k} Φ^H_{1:k}{}^T / k and Φ^H_{2:k+1} Φ^H_{1:k}{}^T / k converge with probability 1 to their population values, and so our estimate of the matrix to be inverted in (5) is consistent. So, as long as this matrix is nonsingular, our estimate of the inverse is also consistent, and our estimate of w converges to the true value with probability 1.
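The LSTD estimator of Equation 5 is a few lines of linear algebra. The following is a minimal sketch, not the authors' code: it assumes history features are stacked as rows, and the smoke test uses random data purely to check shapes.

import numpy as np

def lstd(phi, rewards, gamma):
    """Consistent LSTD estimate of w (Equation 5).
    phi:     (k+1) x j array; row t holds the history features phi^H_t.
    rewards: length-k array of immediate rewards R_t.
    phi_t serves as its own instrumental variable, so we solve
    [sum_t (phi_t - gamma phi_{t+1}) phi_t^T] w = sum_t R_t phi_t."""
    cur, nxt = phi[:-1], phi[1:]
    k = len(rewards)
    A = (cur - gamma * nxt).T @ cur / k   # transpose of the matrix in Eq. 5
    b = cur.T @ rewards / k
    return np.linalg.solve(A, b)

# Smoke test on random data (shapes only; not a meaningful value function).
rng = np.random.default_rng(1)
phi = rng.normal(size=(501, 8))
w = lstd(phi, rng.normal(size=500), gamma=0.9)
print(w.shape)  # (8,)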
3 Predictive Features

LSTD provides a consistent estimate of the value function parameters w; but in practice, if the number of features is large relative to the number of training samples, then the LSTD estimate of w is prone to overfitting. This problem can be alleviated by choosing a small set of features that only contains information that is relevant for value function approximation. However, with the exception of LARS-TD [9], there has been little work on how to select features automatically for value function approximation when the system model is unknown; and of course, manual feature selection depends on not-always-available expert guidance. We approach the problem of finding a good set of features from a bottleneck perspective. That is, given a large set of features of history, we would like to find a compression that preserves only relevant information for predicting the value function J^π. As we will see in Section 4, this improvement is directly related to spectral identification of PSRs.

3.1 Finding Predictive Features Through a Bottleneck

In order to find a predictive feature compression, we first need to determine what we would like to predict. The most relevant prediction is the value function itself; so, we could simply try to predict total future discounted reward. Unfortunately, total discounted reward has high variance, so unless we have a lot of data, learning will be difficult. We can reduce variance by including other prediction tasks as well. For example, predicting individual rewards at future time steps seems highly relevant, and gives us much more immediate feedback. Similarly, future observations hopefully contain information about future reward, so trying to predict observations can help us predict reward. We call these prediction tasks, collectively, features of the future. We write φ^T_t for the vector of all features of the "future at time t," i.e., events starting at time t + 1 and continuing forward. Now, instead of remembering a large arbitrary set of features of history, we want to find a small subspace of features of history that is relevant for predicting features of the future. We will call this subspace a predictive compression, and we will write the value function as a linear function of only the predictive compression of features. To find our predictive compression, we will use reduced-rank regression [10]. We define the following empirical covariance matrices between features of the future and features of histories:

Σ̂_{T,H} = (1/k) Σ_{t=1}^k φ^T_t φ^H_t{}^T    Σ̂_{H,H} = (1/k) Σ_{t=1}^k φ^H_t φ^H_t{}^T   (6)

Let L_H be the lower triangular Cholesky factor of Σ̂_{H,H}. Then we can find a predictive compression of histories by a singular value decomposition (SVD) of the weighted covariance: write U D V^T ≈ Σ̂_{T,H} L_H^{−T} for a truncated SVD [11], where U contains the left singular vectors, V contains the right singular vectors, and D is the diagonal matrix of singular values. (We can tune accuracy by keeping more or fewer singular values, i.e., columns of U, V, or D.) We use the SVD to define a mapping Û from the compressed space up to the space of features of the future, and we define V̂ to be the optimal compression operator given Û (in a least-squares sense, see [12] for details):

Û = U D^{1/2}    V̂ = Û^T Σ̂_{T,H} (Σ̂_{H,H})^{−1}   (7)

By weighting different features of the future differently, we can change the approximate compression in interesting ways. For example, as we will see in Section 4.2, scaling up future reward by a constant factor results in a value-directed compression; but, unlike previous ways to find value-directed compressions [8], we do not need to know a model of our system ahead of time. For another example, let L_T be the Cholesky factor of the empirical covariance of future features Σ̂_{T,T}. Then, if we scale features of the future by L_T^{−T}, the SVD will preserve the largest possible amount of mutual information between history and future, yielding a canonical correlation analysis [13, 14].

3.2 Predictive State Temporal Difference Learning

Now that we have found a predictive compression operator V̂ via Equation 7, we can replace the features of history φ^H_t with the compressed features V̂ φ^H_t in the Bellman recursion, Equation 4:

w^T V̂ Φ^H_{1:k} ≈ R_{1:k} + γ w^T V̂ Φ^H_{2:k+1}   (8)

The least squares solution for w is still prone to an error-in-variables problem. The instrumental variable φ^H_t is still correlated with the true independent variables and uncorrelated with noise, and so we can again use it to unbias the estimate of w. Define the additional covariance matrices:

Σ̂_{R,H} = (1/k) Σ_{t=1}^k R_t φ^H_t{}^T    Σ̂_{H+,H} = (1/k) Σ_{t=1}^k φ^H_{t+1} φ^H_t{}^T   (9)

Then, the corrected Bellman equation is w^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ w^T V̂ Σ̂_{H+,H}, and solving for w gives us the Predictive State Temporal Difference (PSTD) learning algorithm:

w^T = Σ̂_{R,H} ( V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H} )^†   (10)

So far we have provided some intuition for why predictive features should be better than arbitrary features for temporal difference learning.
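Putting Equations 6–10 together, a bare-bones PSTD implementation looks roughly as follows. This is a sketch under simplifying assumptions (dense matrices, plain inverses rather than carefully regularized solves, no feature re-weighting), and the function name pstd is ours.

import numpy as np

def pstd(phi_h, phi_f, rewards, gamma, n):
    """PSTD sketch (Equations 6-10): compress history features through a
    rank-n predictive bottleneck, then solve the corrected Bellman equation.
    phi_h: (k+1) x l   history features phi^H_t (rows)
    phi_f: k x l2      future features  phi^T_t (rows)
    """
    cur, nxt = phi_h[:-1], phi_h[1:]
    k = len(rewards)
    S_th = phi_f.T @ cur / k                 # Sigma_{T,H}, Eq. 6
    S_hh = cur.T @ cur / k                   # Sigma_{H,H}, Eq. 6
    L = np.linalg.cholesky(S_hh)             # lower-triangular factor L_H
    U, D, _ = np.linalg.svd(S_th @ np.linalg.inv(L).T)
    U_hat = U[:, :n] * np.sqrt(D[:n])        # U_hat = U D^{1/2}, Eq. 7
    V_hat = U_hat.T @ S_th @ np.linalg.inv(S_hh)   # V_hat, Eq. 7
    S_rh = rewards @ cur / k                 # Sigma_{R,H}, Eq. 9
    S_ph = nxt.T @ cur / k                   # Sigma_{H+,H}, Eq. 9
    # Eq. 10: w^T = Sigma_{R,H} (V Sigma_{H,H} - gamma V Sigma_{H+,H})^+
    w = S_rh @ np.linalg.pinv(V_hat @ S_hh - gamma * V_hat @ S_ph)
    return w                                  # weights on compressed features

rng = np.random.default_rng(2)
phi_h = rng.normal(size=(301, 20))
phi_f = rng.normal(size=(300, 15))
w = pstd(phi_h, phi_f, rng.normal(size=300), gamma=0.9, n=5)
print(w.shape)   # (5,)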
Below we will show an additional benefit: the model-free algorithm in Equation 10 is, under some circumstances, equivalent to a model-based method which uses subspace identification to learn Predictive State Representations [6, 7].

4 Predictive State Representations

A predictive state representation (PSR) [15] is a compact and complete description of a dynamical system. Unlike POMDPs, which represent state as a distribution over a latent variable, PSRs represent state as a set of predictions of tests. Just as a history is an ordered sequence of action-observation pairs executed prior to time t, we define a test of length i to be an ordered sequence of action-observation pairs τ = a_1 o_1 … a_i o_i that can be executed and observed after time t [15]. The prediction for a test τ after a history h, written τ(h), is the probability that we will see the test observations τ^O = o_1 … o_i, given that we intervene [16] to execute the test actions τ^A = a_1 … a_i: τ(h) = Pr[τ^O | h, do(τ^A)]. If Q = {τ_1, …, τ_n} is a set of tests, we write Q(h) = (τ_1(h), …, τ_n(h))^T for the corresponding vector of test predictions. Formally, a PSR consists of five elements ⟨A, O, Q, m_1, F⟩. A is a finite set of possible actions, and O is a finite set of possible observations. Q is a core set of tests, i.e., a set whose vector of predictions Q(h) is a sufficient statistic for predicting the success probabilities of all tests. F is the set of functions f_τ which embody these predictions: τ(h) = f_τ(Q(h)). And, m_1 = Q(ε) is the initial prediction vector, i.e., the prediction vector for the empty history. In this work we will restrict ourselves to linear PSRs, in which all prediction functions are linear: f_τ(Q(h)) = r_τ^T Q(h) for some vector r_τ ∈ R^{|Q|}. Finally, a core set Q is minimal if the tests in Q are linearly independent [17, 18], i.e., no one test's prediction is a linear function of the other tests' predictions. Since Q(h) is a sufficient statistic for all tests, it is a state for our PSR: i.e., we can remember just Q(h) instead of h itself. After action a and observation o, we can update Q(h) recursively: if we write M_{ao} for the matrix with rows r_{aoτ}^T for τ ∈ Q, then we can use Bayes' Rule to show:

Q(hao) = M_{ao} Q(h) / Pr[o | h, do(a)] = M_{ao} Q(h) / ( m_∞^T M_{ao} Q(h) )   (11)

where m_∞ is a normalizer, defined by m_∞^T Q(h) = 1 for all h. In addition to the above PSR parameters, for reinforcement learning we need a reward function R(h) = η^T Q(h) mapping predictive states to immediate rewards, a discount factor γ ∈ [0, 1] which weights the importance of future rewards vs. present ones, and a policy π(Q(h)) mapping from predictive states to actions. Instead of ordinary PSRs, we will work with transformed PSRs (TPSRs) [6, 7]. TPSRs are a generalization of regular PSRs: a TPSR maintains a small number of sufficient statistics which are linear combinations of a (potentially very large) set of test probabilities. That is, a TPSR maintains a small number of feature predictions instead of test predictions. TPSRs have exactly the same predictive abilities as regular PSRs, but are invariant under similarity transforms: given an invertible matrix S, we can transform m_1 → S m_1, m_∞^T → m_∞^T S^{−1}, and M_{ao} → S M_{ao} S^{−1} without changing the corresponding dynamical system, since pairs S^{−1} S cancel in Eq. 11. The main benefit of TPSRs over regular PSRs is that, given any core set of tests, low dimensional parameters can be found using spectral matrix decomposition and regression instead of combinatorial search.
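One step of the PSR filter in Equation 11 can be written directly. The parameters below are a hypothetical two-test PSR chosen only to exercise the update; note that the normalizer property m_inf^T Q(h) = 1 is preserved by construction.

import numpy as np

def psr_update(q, M_ao, m_inf):
    """Bayes filter for a linear PSR (Equation 11): update the prediction
    vector Q(h) after executing action a and observing o.
    q:     current prediction vector Q(h) for a minimal core set of tests
    M_ao:  parameter matrix for the executed action-observation pair
    m_inf: normalizer satisfying m_inf^T Q(h) = 1 for all h
    """
    v = M_ao @ q
    pr_o = m_inf @ v          # Pr[o | h, do(a)]
    return v / pr_o, pr_o

# Hypothetical 2-test PSR, used only to exercise the update.
M_ao = np.array([[0.6, 0.1],
                 [0.2, 0.3]])
m_inf = np.array([1.0, 1.0])
q = np.array([0.7, 0.3])
q_next, pr = psr_update(q, M_ao, m_inf)
print(q_next, pr)             # m_inf @ q_next == 1 again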
In this respect, TPSRs are closely related to the transformed representations of LDSs and HMMs found by subspace identification [19, 20, 14, 21].

4.1 Learning Transformed PSRs

Let Q be a minimal core set of tests, so that n = |Q| is the linear dimension of the system. Then, let T be a larger core set of tests (not necessarily minimal), and let H be the set of all possible histories. As before, write φ^H_t ∈ R^ℓ for a vector of features of history at time t, and write φ^T_t ∈ R^{ℓ′} for a vector of features of the future at time t. Since T is a core set of tests, by definition we can compute any test prediction τ(h) as a linear function of T(h). And, since feature predictions are linear combinations of test predictions, we can also compute any feature prediction φ(h) as a linear function of T(h). We define the matrix Φ^T ∈ R^{ℓ′×|T|} to embody our predictions of future features: an entry of Φ^T is the weight of one of the tests in T for calculating the prediction of one of the features in φ^T. Below we define several covariance matrices, Equation 12(a–d), in terms of the observable quantities φ^T_t, φ^H_t, a_t, and o_t, and show how these matrices relate to the parameters of the underlying PSR. These relationships then lead to our learning algorithm, Eq. 14 below.

First we define Σ_{H,H}, the covariance matrix of features of histories, as E[φ^H_t φ^H_t{}^T | h_t ∼ ω], where ω is the distribution from which histories are sampled. Given k samples, we can approximate this covariance:

Σ̂_{H,H} = (1/k) Φ^H_{1:k} Φ^H_{1:k}{}^T.   (12a)

As k → ∞, Σ̂_{H,H} converges to the true covariance Σ_{H,H} with probability 1. Next we define Σ_{S,H}, the cross covariance of states and features of histories. Writing s_t = Q(h_t) for the (unobserved) state at time t, let Σ_{S,H} = E[ (1/k) s_{1:k} Φ^H_{1:k}{}^T | h_t ∼ ω (∀t) ]. We cannot directly estimate Σ_{S,H} from data, but this matrix will appear as a factor in several of the matrices that we define below. Next we define Σ_{T,H}, the cross covariance matrix of the features of tests and histories (see [12] for derivations):

Σ_{T,H} ≡ E[φ^T_t φ^H_t{}^T | h_t ∼ ω, do(ζ)] = Φ^T R Σ_{S,H}    Σ̂_{T,H} ≈ (1/k) Φ^T_{1:k} Φ^H_{1:k}{}^T   (12b)

where row τ of the matrix R is r_τ, the linear function that specifies the prediction of the test τ given the predictions of tests in the core set Q. By do(ζ), we mean to approximate the effect of executing all sequences of actions required by all tests or features of the future at once. This is not difficult in our experiments (in which all tests use compatible action sequences); but see [12] for further discussion. Eq. 12b tells us that, because of our assumptions about linear dimension, the matrix Σ_{T,H} has factors R ∈ R^{|T|×n} and Σ_{S,H} ∈ R^{n×ℓ}. Therefore, the rank of Σ_{T,H} is no more than n, the linear dimension of the system. We can also see that, since the size of Σ̂_{T,H} is fixed, as the number of samples k increases, Σ̂_{T,H} → Σ_{T,H} with probability 1.

Next we define Σ_{H,ao,H}, a set of matrices, one for each action-observation pair, that represent the covariance between features of history before and after taking action a and observing o. In the following, 1_t(o) is an indicator variable for whether we see observation o at step t.

Σ_{H,ao,H} ≡ E[φ^H_{t+1} 1_t(o) φ^H_t{}^T | h_t ∼ ω (∀t), do(a) (∀t)]    Σ̂_{H,ao,H} ≈ (1/k) Σ_{t=1}^k φ^H_{t+1} 1_t(o) φ^H_t{}^T   (12c)

Since the dimensions of each Σ̂_{H,ao,H} are fixed, as k → ∞ these empirical covariances converge to the true covariances Σ_{H,ao,H} with probability 1. Finally we define Σ_{R,H}, and approximate the covariance (in this case a vector) of reward and features of history:

Σ̂_{R,H} ≈ (1/k) Σ_{t=1}^k R_t φ^H_t{}^T    Σ_{R,H} ≡ E[R_t φ^H_t{}^T | h_t ∼ ω] = η^T Σ_{S,H}   (12d)
Again, as k → ∞, Σ̂_{R,H} converges to Σ_{R,H} with probability 1.

We now wish to use the above-defined matrices to learn a TPSR from data. To do so we need to make a somewhat-restrictive assumption: we assume that our features of history are rich enough to determine the state of the system, i.e., the regression from φ^H to s is exact: s_t = Σ_{S,H} Σ_{H,H}^{−1} φ^H_t. We discuss how to relax this assumption in [12]. We also need a matrix U such that U^T Φ^T R is invertible; with probability 1 a random matrix satisfies this condition, but as we will see below, there are reasons to choose U via SVD of a scaled version of Σ_{T,H} as described in Sec. 3.1. Using our assumptions we can show a useful identity for Σ_{H,ao,H} (for proof details see [12]):

Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H} = M_{ao} Σ_{S,H}   (13)

This identity is at the heart of our learning algorithm: it states that Σ_{H,ao,H} contains a hidden copy of M_{ao}, the main TPSR parameter that we need to learn. We would like to recover M_{ao} via Eq. 13 as M_{ao} = Σ_{S,H} Σ_{H,H}^{−1} Σ_{H,ao,H} Σ_{S,H}^†; but of course we do not know Σ_{S,H}. Fortunately, it turns out that we can use U^T Σ_{T,H} as a stand-in, since this matrix differs only by an invertible transform (Eq. 12b). We now show how to recover a TPSR from the matrices Σ_{T,H}, Σ_{H,H}, Σ_{R,H}, Σ_{H,ao,H}, and U. Since a TPSR's predictions are invariant to a similarity transform of its parameters, our algorithm only recovers the TPSR parameters to within a similarity transform [7, 12].

b_t ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} φ^H_t = (U^T Φ^T R) s_t   (14a)
B_{ao} ≡ U^T Σ_{T,H} (Σ_{H,H})^{−1} Σ_{H,ao,H} (U^T Σ_{T,H})^† = (U^T Φ^T R) M_{ao} (U^T Φ^T R)^{−1}   (14b)
b_η^T ≡ Σ_{R,H} (U^T Σ_{T,H})^† = η^T (U^T Φ^T R)^{−1}   (14c)

Our PSR learning algorithm is simple: replace each true covariance matrix in Eq. 14 by its empirical estimate. Since the empirical estimates converge to their true values with probability 1 as the sample size increases, our learning algorithm is clearly statistically consistent.

4.2 Predictive State Temporal Difference Learning (Revisited)

Finally, we are ready to show that the model-free PSTD learning algorithm introduced in Section 3.2 is equivalent to a model-based algorithm built around PSR learning. For a fixed policy π, a PSR or TPSR's value function is a linear function of state, V_P(s) = w^T s, and is the solution of the PSR Bellman equation [22]: for all s, w^T s = b_η^T s + γ Σ_{o∈O} w^T B_{πo} s, or equivalently, w^T = b_η^T + γ Σ_{o∈O} w^T B_{πo}. Substituting in our learned PSR parameters from Equations 14(a–c), we get

ŵ^T = Σ̂_{R,H} (U^T Σ̂_{T,H})^† + γ Σ_{o∈O} ŵ^T U^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H,πo,H} (U^T Σ̂_{T,H})^†
ŵ^T U^T Σ̂_{T,H} = Σ̂_{R,H} + γ ŵ^T U^T Σ̂_{T,H} (Σ̂_{H,H})^{−1} Σ̂_{H+,H}

since, by comparing Eqs. 12c and 9, we can see that Σ_{o∈O} Σ̂_{H,πo,H} = Σ̂_{H+,H}. Now, define Û and V̂ as in Eq. 7, and let U = Û as suggested above in Sec. 4.1. Then Û^T Σ̂_{T,H} = V̂ Σ̂_{H,H}, and

ŵ^T V̂ Σ̂_{H,H} = Σ̂_{R,H} + γ ŵ^T V̂ Σ̂_{H+,H}  ⟹  ŵ^T = Σ̂_{R,H} ( V̂ Σ̂_{H,H} − γ V̂ Σ̂_{H+,H} )^†   (15)

Eq. 15 is exactly Eq. 10, the PSTD algorithm. So, we have shown that, if we learn a PSR by the subspace identification algorithm of Sec. 4.1 and then compute its value function via the Bellman equation, we get the exact same answer as if we had directly learned the value function via the model-free PSTD method. In addition to adding to our understanding of both methods, an important corollary of this result is that PSTD is a statistically consistent algorithm for PSR value function approximation; to our knowledge, this is the first such result for a TD method.
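The recovery step of Equation 14 amounts to a handful of matrix products. The sketch below assumes the empirical covariances have already been computed (e.g., as in the PSTD sketch above) and that U was chosen so that U^T Φ^T R is invertible; learn_tpsr is our name for the routine, and the parameters it returns are only identified up to a similarity transform.

import numpy as np

def learn_tpsr(S_th, S_hh, S_rh, S_aoh, U):
    """Recover TPSR parameters from (empirical) covariances, Equation 14.
    S_th: Sigma_{T,H}; S_hh: Sigma_{H,H}; S_rh: Sigma_{R,H};
    S_aoh: dict mapping (a, o) -> Sigma_{H,ao,H};
    U: any matrix making U^T Phi^T R invertible (e.g. the SVD-based choice)."""
    C = U.T @ S_th                         # stand-in for Sigma_{S,H}
    C_pinv = np.linalg.pinv(C)
    extractor = C @ np.linalg.inv(S_hh)    # b_t = extractor @ phi^H_t  (14a)
    B = {ao: extractor @ S @ C_pinv for ao, S in S_aoh.items()}        # (14b)
    b_eta = S_rh @ C_pinv                                              # (14c)
    return extractor, B, b_eta

# Smoke test with random, well-conditioned stand-ins for the covariances.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 12))
S_hh = X.T @ X / 200                       # 12 x 12, positive definite
S_th = rng.normal(size=(9, 12))
S_aoh = {("a", "o"): rng.normal(size=(12, 12))}
ext, B, b_eta = learn_tpsr(S_th, S_hh, rng.normal(size=12), S_aoh,
                           rng.normal(size=(9, 9)))
print(ext.shape, B[("a", "o")].shape, b_eta.shape)  # (9, 12) (9, 9) (9,)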
5 Experimental Results

5.1 Estimating the Value Function of a RR-POMDP

We evaluate the PSTD learning algorithm on a synthetic example derived from [23]. The problem is to find the value function of a policy in a partially observable Markov decision process (POMDP). The POMDP has 4 latent states, but the policy's transition matrix is low rank: the resulting belief distributions lie in a 3-dimensional subspace of the original belief simplex (see [12] for details).

[Figure 1: panels A–C plot estimated value against state for LSTD, LARS-TD, PSTD, and the true J^π; panel D plots expected reward against policy iteration for the threshold strategy, LSTD (16 features), LSTD (220 features), LARS-TD, and PSTD; axis values omitted.]

Figure 1: Experimental Results. Error bars indicate standard error. (A) Estimating the value function with a small number of informative features. All three approaches do well. (B) Estimating the value function with a small set of informative features and a large set of random features. LARS-TD is designed for this scenario and dramatically outperforms PSTD and LSTD. (C) Estimating the value function with a large set of semi-informative features. PSTD is able to determine a small set of compressed features that retain the maximal amount of information about the value function, outperforming LSTD and LARS-TD. (D) Pricing a high-dimensional derivative via policy iteration. The optimal threshold strategy (sell if price is above a threshold [24]) is in black, LSTD (16 canonical features) is in blue, LSTD (on the full 220 features) is cyan, LARS-TD (feature selection from the set of 220) is in green, and PSTD (16 dimensions, compressing 220 features) is in red.

We perform 3 experiments, comparing the performance of LSTD, LARS-TD, and PSTD when different sets of features are used. In each case we compare the value function estimated by each algorithm to the true value function computed by J^π = R (I − γ T^π)^{−1}. In the first experiment we execute the policy π for 1000 time steps. We split the data into overlapping histories and tests of length 5, and sample 10 of these histories and tests to serve as centers for Gaussian radial basis functions. We then evaluate each basis function at every remaining sample. Then, using these features, we learned the value function using LSTD, LARS-TD, and PSTD with linear dimension 3 (Figure 1(A)). Each method estimated a reasonable value function. For the second experiment, we added 490 random, uninformative features to the 10 good features and then attempted to learn the value function with each of the 3 algorithms (Figure 1(B)). In this case, LSTD and PSTD both had difficulty fitting the value function due to the large number of irrelevant features. LARS-TD, designed for precisely this scenario, was able to select the 10 relevant features and estimate the value function better by a substantial margin. For the third experiment, we increased the number of sampled features from 10 to 500. In this case, each feature was somewhat relevant, but the number of features was large compared to the amount of training data. This situation occurs frequently in practice: it is often easy to find a large number of features that are at least somewhat related to state. PSTD outperforms LSTD and LARS-TD by summarizing these features and efficiently estimating the value function (Figure 1(C)).
5.2 Pricing A High-dimensional Financial Derivative

Derivatives are financial contracts with payoffs linked to the future prices of basic assets such as stocks, bonds and commodities. In some derivatives the contract holder has no choices, but in more complex cases, the holder must make decisions, and the value of the contract depends on how the holder acts; e.g., with early exercise the holder can decide to terminate the contract at any time and receive payments based on prevailing market conditions, so deciding when to exercise is an optimal stopping problem. Stopping problems provide an ideal testbed for policy evaluation methods, since we can collect a single data set which lets us evaluate any policy: we just choose the "continue" action forever. (We can then evaluate the "stop" action easily in any of the resulting states.)

We consider the financial derivative introduced by Tsitsiklis and Van Roy [24]. The derivative generates payoffs that are contingent on the prices of a single stock. At the end of each day, the holder may opt to exercise. At exercise the holder receives a payoff equal to the current price of the stock divided by the price 100 days beforehand. We can think of this derivative as a "psychic call": the holder gets to decide whether s/he would like to have bought an ordinary 100-day European call option, at the then-current market price, 100 days ago. In our simulation (and unknown to the investor), the underlying stock price follows a geometric Brownian motion with volatility σ = 0.02 and continuously compounded short term growth rate μ = 0.0004. Assuming stock prices fluctuate only on days when the market is open, these parameters correspond to an annual growth rate of about 10%. In more detail, if w_t is a standard Brownian motion, then the stock price p_t evolves as dp_t = μ p_t dt + σ p_t dw_t, and we can summarize relevant state at the end of each day as a vector x_t ∈ R^100, with x_t = (p_{t−99}/p_{t−100}, p_{t−98}/p_{t−100}, …, p_t/p_{t−100})^T. This process is Markov and ergodic [24, 25]: x_t and x_{t+100} are independent and identically distributed. The immediate reward for exercising the option is G(x) = x(100), and the immediate reward for continuing to hold the option is 0. The discount factor γ = e^{−μ} is determined by the growth rate; this corresponds to assuming that the risk-free interest rate is equal to the stock's growth rate, meaning that the investor gains nothing in expectation by holding the stock itself. The value of the derivative, if the current state is x, is given by V*(x) = sup_t E[γ^t G(x_t) | x_0 = x]. Our goal is to calculate an approximate value function V(x) = w^T φ^H(x), and then use this value function to generate a stopping time min{t | G(x_t) ≥ V(x_t)}. To do so, we sample a sequence of 1,000,000 states x_t ∈ R^100 and calculate features φ^H of each state. We then perform policy iteration on this sample, alternately estimating the value function under a given policy and then using this value function to define a new greedy policy "stop if G(x_t) ≥ w^T φ^H(x_t)."

Within the above strategy, we have two main choices: which features do we use, and how do we estimate the value function in terms of these features. For value function estimation, we used LSTD, LARS-TD, or PSTD. In each case we re-used our 1,000,000-state sample trajectory for all iterations: we start at the beginning and follow the trajectory as long as the policy chooses the "continue" action, with reward 0 at each step.
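The simulation setup is easy to reproduce approximately. The sketch below uses a simple Euler discretization of the stated geometric Brownian motion and builds the 100-dimensional state x_t and payoff G(x) as defined above; the initial price and horizon are arbitrary choices of ours.

import numpy as np

rng = np.random.default_rng(4)
mu, sigma, days = 0.0004, 0.02, 1200   # growth rate, volatility, horizon

# Euler discretization of dp = mu*p*dt + sigma*p*dw with one step per
# trading day; the starting price of 100.0 is arbitrary for this sketch.
prices = 100.0 * np.cumprod(1.0 + mu + sigma * rng.normal(size=days))

def state(t):
    """x_t in R^100: the last 100 prices normalized by p_{t-100}."""
    return prices[t - 99 : t + 1] / prices[t - 100]

def payoff(x):
    """Exercise payoff G(x) = x(100): today's price over the 100-day-old one."""
    return x[-1]

gamma = np.exp(-mu)   # discounting at the risk-free (= growth) rate

x = state(150)
print(x.shape, round(payoff(x), 4))   # (100,) ...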
When the policy executes the "stop" action, the reward is G(x) and the next state's features are all 0; we then restart the policy 100 steps in the future, after the process has fully mixed. For feature selection, we are fortunate: previous researchers have hand-selected a "good" set of 16 features for this data set through repeated trial and error (see [12] and [24, 25]). We greatly expand this set of features, then use PSTD to synthesize a small set of high-quality combined features. Specifically, we add the entire 100-step state vector, the squares of the components of the state vector, and several additional nonlinear features, increasing the total number of features from 16 to 220. We use histories of length 1, tests of length 5, and (for comparison's sake) we choose a linear dimension of 16. Tests (but not histories) were value-directed by reducing the variance of all features except reward by a factor of 100. Figure 1D shows results. We compared PSTD (reducing 220 to 16 features) to LSTD with either the 16 hand-selected features or the full 220 features, as well as to LARS-TD (220 features) and to a simple thresholding strategy [24]. In each case we evaluated the final policy on 10,000 new random trajectories. PSTD outperformed each of its competitors, improving on the next best approach, LARS-TD, by 1.75 percentage points. In fact, PSTD performs better than the best previously reported approach [24, 25] by 1.24 percentage points. These improvements correspond to appreciable fractions of the risk-free interest rate (which is about 4 percentage points over the 100-day window of the contract), and therefore to significant arbitrage opportunities: an investor who doesn't know the best strategy will consistently undervalue the security, allowing an informed investor to buy it for below its expected value.

6 Conclusion

In this paper, we attack the feature selection problem for temporal difference learning. Although well-known temporal difference algorithms such as LSTD can provide asymptotically unbiased estimates of value function parameters in linear architectures, they can have trouble in finite samples: if the number of features is large relative to the number of training samples, then they can have high variance in their value function estimates. For this reason, in real-world problems, a substantial amount of time is spent selecting a small set of features, often by trial and error [24, 25]. To remedy this problem, we present the PSTD algorithm, a new approach to feature selection for TD methods, which demonstrates how insights from system identification can benefit reinforcement learning. PSTD automatically chooses a small set of features that are relevant for prediction and value function approximation. It approaches feature selection from a bottleneck perspective, by finding a small set of features that preserves only predictive information. Because of the focus on predictive information, the PSTD approach is closely connected to PSRs: under appropriate assumptions, PSTD's compressed set of features is asymptotically equivalent to TPSR state, and PSTD is a consistent estimator of the PSR value function. We demonstrate the merits of PSTD compared to two popular alternative algorithms, LARS-TD and LSTD, on a synthetic example, and argue that PSTD is most effective when approximating a value function from a large number of features, each of which contains at least a little information about state.
Finally, we apply PSTD to a difficult optimal stopping problem, and demonstrate the practical utility of the algorithm by outperforming several alternative approaches and topping the best reported previous results.

References

[1] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[2] Justin A. Boyan. Least-squares temporal difference learning. In Proc. Intl. Conf. Machine Learning, pages 49–56. Morgan Kaufmann, San Francisco, CA, 1999.
[3] Steven J. Bradtke and Andrew G. Barto. Linear least-squares algorithms for temporal difference learning. In Machine Learning, pages 22–33, 1996.
[4] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. J. Mach. Learn. Res., 4:1107–1149, 2003.
[5] Ronald Parr, Lihong Li, Gavin Taylor, Christopher Painter-Wakefield, and Michael L. Littman. An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 752–759, New York, NY, USA, 2008. ACM.
[6] Matthew Rosencrantz, Geoffrey J. Gordon, and Sebastian Thrun. Learning low dimensional predictive representations. In Proc. ICML, 2004.
[7] Byron Boots, Sajid M. Siddiqi, and Geoffrey J. Gordon. Closing the learning-planning loop with predictive state representations. In Proceedings of Robotics: Science and Systems VI, 2010.
[8] Pascal Poupart and Craig Boutilier. Value-directed compression of POMDPs. In NIPS, pages 1547–1554, 2002.
[9] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 521–528, New York, NY, USA, 2009. ACM.
[10] Gregory C. Reinsel and Rajabather Palani Velu. Multivariate Reduced-rank Regression: Theory and Applications. Springer, 1998.
[11] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996.
[12] Byron Boots and Geoffrey J. Gordon. Predictive state temporal difference learning. Technical report, arXiv.org.
[13] Harold Hotelling. The most predictable criterion. Journal of Educational Psychology, 26:139–142, 1935.
[14] S. Soatto and A. Chiuso. Dynamic data factorization. Technical report, UCLA, 2001.
[15] Michael Littman, Richard Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems (NIPS), 2002.
[16] Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[17] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12:1371–1398, 2000.
[18] Satinder Singh, Michael James, and Matthew Rudary. Predictive state representations: A new theory for modeling dynamical systems. In Proc. UAI, 2004.
[19] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer, 1996.
[20] Tohru Katayama. Subspace Methods for System Identification. Springer-Verlag, 2005.
[21] Daniel Hsu, Sham Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. In COLT, 2009.
[22] Michael R. James, Ton Wessling, and Nikos A. Vlassis. Improving approximate value iteration using memories and predictive state representations. In AAAI, 2006.
[23] Sajid Siddiqi, Byron Boots, and Geoffrey J. Gordon. Reduced-rank hidden Markov models.
In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS-2010), 2010.
[24] John N. Tsitsiklis and Benjamin Van Roy. Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Transactions on Automatic Control, 44:1840–1851, 1997.
[25] David Choi and Benjamin Van Roy. A generalized Kalman filter for fixed point approximation and efficient temporal-difference learning. Discrete Event Dynamic Systems, 16(2):207–239, 2006.
Factorized Latent Spaces with Structured Sparsity Yangqing Jia1 , Mathieu Salzmann1,2 , and Trevor Darrell1 1 UC Berkeley EECS and ICSI 2 TTI-Chicago {jiayq,trevor}@eecs.berkeley.edu, [email protected] Abstract Recent approaches to multi-view learning have shown that factorizing the information into parts that are shared across all views and parts that are private to each view could effectively account for the dependencies and independencies between the different input modalities. Unfortunately, these approaches involve minimizing non-convex objective functions. In this paper, we propose an approach to learning such factorized representations inspired by sparse coding techniques. In particular, we show that structured sparsity allows us to address the multiview learning problem by alternately solving two convex optimization problems. Furthermore, the resulting factorized latent spaces generalize over existing approaches in that they allow having latent dimensions shared between any subset of the views instead of between all the views only. We show that our approach outperforms state-of-the-art methods on the task of human pose estimation. 1 Introduction Many computer vision problems inherently involve data that is represented by multiple modalities such as different types of image features, or images and surrounding text. Exploiting these multiple sources of information has proven beneficial for many computer vision tasks. Given these multiple views, an important problem therefore is that of learning a latent representation of the data that best leverages the information contained in each input view. Several approaches to addressing this problem have been proposed in the recent years. Multiple kernel learning [2, 24] methods have proven successful under the assumption that the views are independent. In contrast, techniques that learn a latent space shared across the views (Fig. 1(a)), such as Canonical Correlation Analysis (CCA) [12, 3], the shared Kernel Information Embedding model (sKIE) [23], and the shared Gaussian Process Latent Variable Model (shared GPLVM) [21, 6, 15], have shown particularly effective to model the dependencies between the modalities. However, they do not account for the independent parts of the views, and therefore either totally fail to represent them, or mix them with the information shared by all views. To generalize over the above-mentioned approaches, methods have been proposed to explicitly account for the dependencies and independencies of the different input modalities. To this end, these methods factorize the latent space into a shared part common to all views and a private part for each modality (Fig. 1(b)). This has been shown for linear mappings [1, 11], as well as for non-linear ones [7, 14, 20]. In particular, [20] proposed to encourage the shared-private factorization to be nonredundant while simultaneously discovering the dimensionality of the latent space. The resulting FOLS models were shown to yield more accurate results in the context of human pose estimation. This, however, came at the price of solving a complicated, non-convex optimization problem. FOLS also lacks an efficient inference method, and extension from two views to multiple views is not straightforward since the number of shared/latent spaces that need to be explicitly modeled grows exponentially with the number of views. 
In this paper, we propose a novel approach to finding a latent space in which the information is correctly factorized into shared and private parts, while avoiding the computational burden of previous techniques [14, 20]. Furthermore, our formulation has the advantage over existing shared-private factorizations of allowing shared information between any subset of the views, instead of only be1 Z1 Z X (1) X (2) (a) Z2 Zs X (1) ? ?1 ? X (2) (b) D(1) D(2) X (1) X (2) (c) (1) D ?1 ? ?2 ? ?s (2) (1) D ?s D ?s X (1) (2) D ?2 X (2) (d) Figure 1: Graphical models for the two-view case of (a) shared latent space models [23, 21, 6, 15], (b) shared-private factorizations [7, 14, 20], (c) the global view of our model, where the sharedprivate factorization is automatically learned instead of explicitly separated, and (d) an equivalent shared-private spaces interpretation of our model. Due to structured sparsity, rows ?s of ? are shared across the views, whereas rows ?1 and ?2 are private to view 1 and 2, respectively. tween all views. In particular, we represent each view as a linear combination of view-dependent dictionary entries. While the dictionaries are specific to each view, the weights of these dictionaries act as latent variables and are the same for all the views. Thus, as shown in Fig. 1(c), the data is embedded in a latent space that generates all the views. By exploiting the idea of structured sparsity [26, 18, 4, 17, 9], we encourage each view to only use a subset of the latent variables, and at the same time encourage the whole latent space to be low-dimensional. As a consequence, and as depicted in Fig. 1(d), the latent space is factorized into shared parts which represent information common to multiple views, and private parts which model the remaining information of the individual views. Training the model can be done by alternately solving two convex optimization problems, and inference by solving a convex problem. We demonstrate the effectiveness of our approach on the problem of human pose estimation where the existence of shared and private spaces has been shown [7]. We show that our approach correctly factorizes the latent space and outperforms state-of-the-art techniques. 2 Learning a Latent Space with Structured Sparsity In this section, we first formulate the problem of learning a latent space for multi-view modeling. We then briefly review the concepts of sparse coding and structured sparsity, and finally introduce our approach within this framework. 2.1 Problem Statement and Notations Let X = {X(1) , X(2) , ? ? ? , X(V ) } be a set of N observations obtained from V views, where X(v) ? <Pv ?N contains the feature vectors for the v th view. We aim to find an embedding ? ? <Nd ?N of the data into an Nd -dimensional latent space and a set of dictionaries D = {D(1) , D(2) , ? ? ? , D(V ) }, with D(v) ? <Pv ?Nd the dictionary entries for view v, such that X(v) is generated by D(v) ?, as depicted in Fig. 1(c). More specifically, we seek the latent embedding ? and the dictionaries that best reconstruct the data in the least square sense by solving the optimization problem min D,? V X kX(v) ? D(v) ?k2Fro . (1) v=1 Furthermore, as explained in Section 1, we aim to find a latent space that naturally separates the information shared among several views from the information private to each view. Our approach to addressing this problem is inspired by structured sparsity, which we briefly review below. 
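To make the reconstruction objective in (1) concrete before the review, here is a minimal numpy sketch that evaluates it for a list of views. The function name, toy dimensions, and random data are illustrative assumptions of ours, not code from the paper.

```python
import numpy as np

def reconstruction_loss(X_views, D_views, alpha):
    """Objective of problem (1): sum over views of the squared Frobenius
    reconstruction error ||X^(v) - D^(v) alpha||_F^2, with a single latent
    embedding alpha shared by all views."""
    return sum(np.linalg.norm(X - D @ alpha, ord="fro") ** 2
               for X, D in zip(X_views, D_views))

# Toy usage: two views (20-D and 30-D) of N = 50 points, N_d = 8 latents.
rng = np.random.default_rng(0)
alpha = rng.standard_normal((8, 50))
D_views = [rng.standard_normal((20, 8)), rng.standard_normal((30, 8))]
X_views = [D @ alpha + 0.01 * rng.standard_normal((D.shape[0], 50))
           for D in D_views]
print(reconstruction_loss(X_views, D_views, alpha))  # small: only the noise
```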
Throughout this paper, given a matrix A, we will use the term Ai to denote its ith column vector, Ai,? to denote its ith row vector, and A?,? (A?,? ) to denote the submatrix formed by taking a subset of its columns (rows), where the set ? contains the indices of the chosen columns (rows). 2.2 Sparse Coding and Structured Sparsity In the single-view case, sparse coding techniques [16, 25, 13] have been proposed to represent the observed data (e.g., image features) as a linear combination of dictionary entries, while encouraging each observation vector to only employ a subset of all the available dictionary entries. More 2 formally, let X ? <P ?N be the matrix of training examples. Sparse coding aims to find a set of dictionary entries D ? <P ?Nd and the corresponding linear combination weights ? ? <Nd ?N by solving the optimization problem 1 ||X ? D?||2Fro + ??(?) N s.t. ||Di || ? 1 , 1 ? i ? Nd , min (2) D,? where ? is a regularizer that encourages sparsity of its input, and ? is the weight that sets the relative influence of both terms. In practice, when ? is a convex function, problem (2) is convex in D for a fixed ? and vice-versa. Typically, the L1 norm is used to encourage sparsity, which yields ?(?) = N X k?j k1 = j=1 Nd N X X |?i,j | . (3) j=1 i=1 While sparse coding has proven effective in many domains, it fails to account for any structure in the observed data. For instance, in classification tasks, one would expect the observations belonging to the same class to depend on the same subset of dictionary entries. This problem has been addressed by structured sparse coding techniques [26, 4, 9], which encode the structure of the problem in the regularizer. Typically, these methods rely on the notion of groups among the training examples to encourage members of the same group to rely on the same dictionary entries. This can simply be done by re-writing problem (2) as Ng X 1 2 ?(??,?g ) ||X ? D?||Fro + ? min D,? N g=1 (4) s.t. ||Di || ? 1 , 1 ? i ? Nd , where Ng is the total number of groups, ?g represents the indices of the examples that belong to group g, and ??,?g is the matrix containing the weights associated to these examples. To keep the problem convex in ?, ? is usually taken either as the L1,2 norm, or as the L1,? norm, which yield ?(??,?g ) = Nd X ||?i,?g ||2 , or ?(??,?g ) = i=1 Nd X i=1 ||?i,?g ||? = Nd X i=1 max |?i,k | . k??g (5) In general, structured sparsity can lead to more meaningful latent embeddings than sparse coding. For example, [4] showed that the dictionary learned by grouping local image descriptors into images or classes achieved better accuracy than sparse coding for small dictionary sizes. 2.3 Multi-view Learning with Structured Sparsity While the previous framework has proven successful for many tasks, it has only been applied to the single-view case. Here, we propose an approach to multi-view learning inspired by structured sparse coding techniques. To correctly account for the dependencies and independencies of the views, we cast the problem as that of finding a factorization of the latent space into subspaces that are shared across several views and subspaces that are private to the individual views. In essence, this can be seen as having each view exploiting only a subset of the dimensions of the global latent space, as depicted by Fig. 1(d). 
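As a quick illustration of the two regularizers in Eq. (5), the following sketch evaluates the L_{1,2} and L_{1,infinity} group penalties on a weight matrix, summed over groups as in problem (4). The column-group encoding and all names are assumptions made for this example.

```python
import numpy as np

def psi_l1_inf(A, groups):
    """Sum over groups of Eq. (5)'s L_{1,inf} penalty: for each group
    Omega_g of columns, sum over rows i of max_{k in Omega_g} |A[i, k]|."""
    return sum(np.abs(A[:, g]).max(axis=1).sum() for g in groups)

def psi_l1_2(A, groups):
    """Same structure with the L_{1,2} norm of Eq. (5): per-row Euclidean
    norm of the sub-matrix restricted to each column group."""
    return sum(np.linalg.norm(A[:, g], axis=1).sum() for g in groups)

rng = np.random.default_rng(1)
alpha = rng.standard_normal((5, 12))           # N_d = 5 latents, N = 12 points
groups = [np.arange(0, 6), np.arange(6, 12)]   # two groups of examples
print(psi_l1_inf(alpha, groups), psi_l1_2(alpha, groups))
```

Both penalties drive entire rows to zero within a group, which is what makes the members of a group rely on the same dictionary entries.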
Note that this definition is in fact more general than the usual definition of shared-private factorizations [7, 14, 20], since it allows latent dimensions to be shared across any subset of the views rather than across all views only. More formally, to find a shared-private factorization of the latent embedding ? that represents the multiple input modalities, we adopt the idea of structured sparsity and aim to find a set of dictionaries D = {D(1) , D(2) , ? ? ? , D(V ) }, each of which uses only a subspace of the latent space. This can be achieved by re-formulating problem (1) as min D,? V V X 1 X kX(v) ? D(v) ?k2Fro + ? ?((D(v) )T ) N v=1 v=1 s.t. ||??,i || ? 1 , 1 ? i ? Nd . 3 (6) where the regularizer ?((D(v) )T ) can be defined using the L1,2 or L1,? norm. In practice, we chose the L1,? norm regularizer which has proven more effective than the L1,2 [18, 17]. Note that, here, we enforce structured sparsity on the dictionary entries instead of on the weights ?. Furthermore, note that this sparsity encourages the columns of the individual D(v) to be zeroed-out instead of the rows in the usual formulation. The intuition behind this is that we expect each view X(v) to only depend on a subset of the latent dimensions. Since X(v) is generated by D(v) ?, having zero-valued columns of D(v) removes the influence of the corresponding latent dimensions on the reconstruction. While the formulation in Eq. 6 encourages each view to only use a limited number of latent dimensions, it doesn?t guarantee that parts of the latent space will be shared across the views. With a sufficiently large number Nd of dictionary entries, the same information can be represented in several parts of the dictionary. This issue is directly related to the standard problem of finding the correct dictionary size. A simple approach would be to manually choose the dimension of the latent space, but this introduces an additional hyperparameter to tune. Instead, we propose to address this issue by trying to find the smallest size of dictionary that still allows us to reconstruct the data well. In spirit, the motivation is similar to [8, 20] that use a relaxation of rank constraints to discover the dimensionality of the latent space. Here, we further exploit structured sparsity and re-write problem (6) as min D,? V V X 1 X kX(v) ? D(v) ?k2Fro + ? ?((D(v) )T ) + ??(?) , N v=1 v=1 (7) where we replaced the constraints on ? by an L1,? norm regularizer that encourages rows of ? to be zeroed-out. This lets us automatically discover the dimensonality of the latent space ?. Furthermore, if there is shared information between several views, this regularizer will favor representing it in a single latent dimension, instead of having redundant parts of the latent space. The optimization problem (7) is convex in D for a fixed ? and vice versa. Thus, in practice, we alternate between optimizing D with a fixed ? and the opposite. Furthermore, to speed up the process, after each iteration, we remove the latent dimensions whose norm is less than a pre-defined threshold. Note that efficient optimization techniques for the L1,? norm have been proposed in the literature [17], enabling efficient optimization algorithms for the problem. 2.4 Inference (1) (V ) At inference, given a new observation {x? , ? ? ? , x? }, the corresponding latent embedding ?? can be obtained by solving the convex problem min ?? V X (v) kx? ? D(v) ?? k22 + ?k?? k1 , (8) v=1 where the regularizer lets us deal with noise in the observations. 
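Returning to the training problem (7), the alternation over D and alpha described above can be sketched as follows. This is only a crude subgradient stand-in under our own assumptions (learning rate, penalty weights, pruning threshold, iteration count): the paper instead solves each convex subproblem exactly, e.g., with the efficient L_{1,infinity} projections of [17].

```python
import numpy as np

def linf_rows_subgrad(A):
    """A subgradient of sum_i ||A_{i,.}||_inf: put sign(A[i, j*]) at one
    maximizing column j* of each row, zeros elsewhere."""
    G = np.zeros_like(A)
    j = np.abs(A).argmax(axis=1)
    i = np.arange(A.shape[0])
    G[i, j] = np.sign(A[i, j])
    return G

def alternating_fit(X_views, N_d=15, iters=500, lr=1e-2,
                    lam=0.1, mu=0.1, prune=1e-3):
    """Crude alternating subgradient scheme for problem (7)."""
    N = X_views[0].shape[1]
    rng = np.random.default_rng(0)
    alpha = 0.1 * rng.standard_normal((N_d, N))
    Ds = [0.1 * rng.standard_normal((X.shape[0], N_d)) for X in X_views]
    for _ in range(iters):
        # D-step (alpha fixed): fit term plus Psi((D^(v))^T), i.e. an
        # L_{1,inf} penalty over the *columns* of each dictionary.
        for v, X in enumerate(X_views):
            g = (2.0 / N) * (Ds[v] @ alpha - X) @ alpha.T
            g += lam * linf_rows_subgrad(Ds[v].T).T
            Ds[v] -= lr * g
        # alpha-step (D fixed): fit term plus mu * Psi(alpha) over rows.
        g = sum((2.0 / N) * D.T @ (D @ alpha - X)
                for D, X in zip(Ds, X_views))
        g += mu * linf_rows_subgrad(alpha)
        alpha -= lr * g
        # Drop latent dimensions whose rows were driven to (near) zero,
        # as the paper suggests doing after each iteration.
        keep = np.linalg.norm(alpha, axis=1) > prune
        alpha, Ds = alpha[keep], [D[:, keep] for D in Ds]
    return Ds, alpha
```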
Another advantage of our model is that it easily allows us to address the case where only a subset of the views is observed at test time. This scenario arises, for example, in human pose estimation, where view $X^{(1)}$ corresponds to image features and view $X^{(2)}$ contains the 3D poses. At inference, the goal is to estimate the pose $x_*^{(2)}$ given new image features $x_*^{(1)}$. To this end, we seek to estimate the latent variables $\alpha_*$, as well as the unknown views from the available views. This is equivalent to first solving the convex problem
$$\min_{\alpha_*} \sum_{v \in V_a} \| x_*^{(v)} - D^{(v)} \alpha_* \|_2^2 + \lambda \| \alpha_* \|_1 \; , \qquad (9)$$
where $V_a$ is the set of indices of available views. The remaining unobserved views $x_*^{(v)}$, $v \notin V_a$, are then estimated as $x_*^{(v)} = D^{(v)} \alpha_*$.

3 Related Work

While our method is closely related to the shared-private factorization algorithms which we discussed in Section 1, it was inspired by the existing sparse coding literature and therefore is also
Their intuition for doing so was to encourage dictionary entries to represent the variability of parts of the observation space, such as the variability of the eyes in the context of face images. Finally, it is worth noting that imposing structured sparsity regularization on both D and ? naturally yields a multi-view, multi-class latent space learning algorithm that can be deemed as a generalization of several algorithms summarized here. 4 Experimental Evaluation In this section, we show the results of our approach on learning factorized latent spaces from multiview inputs. We compare our results against those obtained with state-of-the-art techniques on the task of human pose estimation. 4.1 Toy Example First, we evaluated our approach on the same toy case used by [20]. This shows our method?s ability to correctly factorize a latent space into shared and private parts. This toy example consists of two 1 In our paper, we define the Lp,q norm of a matrix A to be the p-norm of the vector containing of the q-norms of the matrix rows, i.e., kAkp,q = (kA1,? kq , kA2,? kq , ? ? ? , kAn,? kq ) . p 5 1 Correlated Noise 0 Shared 0 ?1 ?1 1 1 0.02 0 ?0.02 0.5 ?1 (a) Generative Signal (View 1) 0 ?0.5 0.5 0 X(2) 0 Private2 Private1 X(1) Shared 1 0 ?0.5 ?1 (b) Generative Signal (View 2) 1 1 0 0 ?1 ?1 1 1 0 0 ?1 ?1 1 1 0 0 ?1 ?1 (c) Observations Dictionary for View 1 Dictionary for View 2 5 5 10 10 15 15 20 20 1 2 3 1 2 3 (d) CCA (e) Our Method (f) Dictionaries Figure 2: Latent spaces recovered on a toy example. (a,b) Generative signals for the two views. (c) Correlated noise and the two 20D input views. (d) First 3 dimensions recovered by CCA. (e) 3-dimensional latent space recovered with our method. Note that, as opposed to CCA, our approach correctly recovered the generative signals and discarded the noise. (f) Dictionaries learned by our algorithm for each view. Fully white columns correspond to zero-valued vectors; note that the dictionary for each view uses only the shared dimension and its own private dimension. views generated from one shared signal and one private signal per view depicted by Fig. 2(a,b). In particular, we used sinusoidal signals at different frequencies such that ? ?(1) = [sin(2?t); cos(? 2 t))], ?(2) = [sin(2?t); cos( 5?t))] , (11) where t was sampled from a uniform distribution in the interval (?1, 1). This yields a 3-dimensional ground-truth latent space, with 1 shared dimension and 2 private dimensions. The observations X(v) were generated by randomly projecting the ?(v) into 20-dimensional spaces and adding Gaussian noise with variance 0.01. Finally, we added noise of the form ynoise = 0.02 sin(3.6?t) to both views to simulate highly correlated noise. The input views are depicted in Fig. 2(c) To initialize our method, we first applied PCA separately on both views, as well as on the concatenation of the views, and in each case, kept the components representing 95% of the variance. We took ? as the concatenation of the corresponding weights. Note that the fact that this latent space is redundant is dealt with by our regularization on ?. We then alternately optimized D and ?, and let the algorithm determine the optimal latent dimensionality. Fig. 2(e,f) depicts the reconstructed latent spaces for both views, as well as the learned dictionaries, which clearly show the shared-private factorization. In Fig. 2(d), we show the results obtained with CCA. 
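For reference, the toy data of this section can be regenerated with a few lines of numpy. Two hedges: the private-signal frequencies in Eq. (11) are garbled in the extraction, so $\sqrt{2}\pi$ and $\sqrt{5}\pi$ below are our reading of them, and we take the correlated noise to be $0.02\sin(3.6\pi t)$; the PCA-based initialization described afterwards is not included.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
t = rng.uniform(-1.0, 1.0, size=N)

shared = np.sin(2 * np.pi * t)                                # 1 shared dim
alpha1 = np.vstack([shared, np.cos(np.sqrt(2) * np.pi * t)])  # + private dim
alpha2 = np.vstack([shared, np.cos(np.sqrt(5) * np.pi * t)])  # + private dim

common_noise = 0.02 * np.sin(3.6 * np.pi * t)   # correlated across views
X_views = []
for a in (alpha1, alpha2):
    P = rng.standard_normal((20, a.shape[0]))   # random 20-D projection
    X_views.append(P @ a + 0.1 * rng.standard_normal((20, N)) + common_noise)
X1, X2 = X_views                                # the two observed views
```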
Note that our approach correctly discovered the original generative signals and discarded the noise, whereas CCA recovered the shared signal, but also the correlated noise and an additional noise. This confirms that our approach is well-suited to learn shared-private factorizations, and shows that CCA-based approaches [1, 11] tend to be sensitive to noise. 4.2 Human Pose Estimation We then applied our method to the problem of human pose estimation, in which the task is to recover 3D poses from 2D image features. It has been shown that this problem is ambiguous, and that sharedprivate factorizations helped accounting for these ambiguities. Here, we used the HumanEva dataset [22] which consists of synchronized images and motion capture data describing the 3D locations of the 19 joints of a human skeleton. These two types of observations can be seen as two views of the same problem from which we can learn a latent space. In our experiments, we compare our results with those of several regression methods that directly learn a mapping from image features to 3D poses. In particular, we used linear regression (LinReg), Gaussian Process regression with a linear kernel (GP-lin) and with an RBF kernel (GP-rbf), and nearest-neighbor in the feature space (NN). We also compare our results with those obtained with the FOLS-GPLVM [20], which also proposes a shared-private factorization of the latent space. Note that we did not compare against other shared-private factorizations [7, 14], or purely shared 6 Data Jogging Walking Lin-Reg 1.420 2.167 GP-lin 1.429 2.363 GP-rbf 1.396 2.330 NN 1.436 2.175 FOLS 1.461 2.137 Our Method 0.954 1.322 15 3D Pose 2 4 6 8 4 6 10 10 20 30 40 50 5 15 10 10 20 30 40 50 2 Image Features 5 10 3D Pose Image Features Table 2: Mean squared errors between the ground truth and the reconstructions obtained by different methods. 8 10 2 4 6 8 10 12 14 2 4 6 8 10 12 14 PHOG (a) jogging (b) walking 20 RT 20 40 60 3D Pose 40 10 20 30 40 50 5 10 15 20 5 10 15 20 5 10 15 20 (c) walking with multiple features Figure 3: Dictionaries learned from the HumanEva data. Each column corresponds to a dictionary entry. (a) and (b) show the 2-view case, and (c) shows a three-view case. Note that in (c) our model found latent dimensions shared among all views, but also shared between the image features only. models [21, 6, 15, 23], since they were shown to be outperformed by the FOLS-GPLVM [20] for human pose estimation. To initialize the latent spaces for our model and for the FOLS-GPLVM, we proceeded similarly as for the toy example; We applied PCA on both views separately, as well as on the concatenated views, and retained the components representing 95% of the variance. In our case, we set ? to be the concatenation of the corresponding PCA weights. For the FOLS-GPLVM, we initialized the shared latent space with the coefficients of the joint PCA, and the private spaces with those of the individual PCAs. We performed cross validation on the jogging data, and the optimal setting ? = 0.01 and ? = 0.1 was then fixed for all experiments. At inference for human pose estimation, only one of the views (i.e., the images) is available. As shown in Section 2.4, our model provides a natural way to deal with this case by computing the latent variables from the image features first, and then recovering the 3D coordinates using the learned dictionary. 
For the FOLS-GPLVM, we followed the same strategy as in [20]; we computed the nearest-neighbor among the training examples in image feature space and took the corresponding shared and private latent variables that we mapped to the pose. No special care was required for the other baselines, since they explicitly take the images as inputs and the poses as outputs. As a first case, we used hierarchical features [10] computed on the walking and jogging video sequences of the first subject seen from a single camera. As the subject moves in circles, we used the first loop to train our model, and the remaining ones for testing. Table 2 summarizes the mean squared reconstruction error for all the methods. Note that our approach yields a smaller error than the other methods. In Fig. 3(a,b), we show the factorization of the latent space obtained by our approach by displaying the learned dictionaries 2 . For the jogging case our algorithm automatically found a low-dimensional latent space of 10 dimensions, with a 4D private space for the image features, a 4D shared space, and a 2D private space for the 3D pose3 . For the walking case, the 2 Note that the latent space per se is a dense, low-dimensional space, and whether a dimension is private or shared among multiple views is determined by the corresponding dictionary entries. 3 A latent dimension is considered private if the norm of the corresponding dictionary entry in the other view is smaller than 10% of the average norm of the dictionary entries for that view. 7 Feature PHOG RT PHOG+RT Lin-Reg 1.190 1.345 1.159 GP-lin 1.167 1.272 1.042 GP-rbf 0.839 0.827 0.727 NN 1.279 1.067 1.090 FOLS 1.277 1.068 1.015 Our Method 0.778 1.141 0.769 ?=0 2.886 3.962 1.306 ?=0 0.863 1.235 0.794 Table 3: Mean squared errors for different choices of image features. The last two columns show the result of our method while forcing one regularization term to be zero. See text for details. Mean Squared Error with PHOG Features Mean Squared Error with RT Features 3.5 Mean Squared Error with both Features 5 Our Method GP?lin GP?rbf nn FOLS 4 MSE 3.5 MSE 2.5 4.5 2 3 3 2.5 2.5 2 2 1.5 1.5 1.5 1 Our Method GP?lin GP?rbf nn FOLS 3.5 MSE 3 4 Our Method GP?lin GP?rbf nn FOLS 0 20 40 60 number of training data (a) PHOG 80 100 1 0 20 40 60 number of training data (b) RT 80 100 1 0 20 40 60 number of training data 80 100 (c) PHOG+RT Figure 4: Mean squared error as a function of the number of training examples using PHOG features only, RT features only, or both feature types simultaneously. private space for the image features was found to be higher-dimensional. This can partially explain why the other methods did not perform as well as in the jogging case. Next, we evaluated the performance of the same algorithms for different image features. In particular, we used randomized tree (RT) features generated by [19], and PHOG features [5]. For this case, we only considered the walking sequence and similarly trained the different methods using the first cycle and tested on the rest of the sequence. The top two rows of Table 3 show the results of the different approaches for the individual features. Note that, with the RT features that were designed to eliminate the ambiguities in pose estimation, GP regression with an RBF kernel performs slightly better than us. However, this result is outperformed by our model with PHOG features. To show the ability of our method to model more than two views, we learned a latent space by simultaneously using RT features, PHOG features and 3D poses. 
The last row of Table 3 shows the corresponding reconstruction errors. In this case, we used the concatenated features as input to Lin-Reg, GP-lin and NN. For GP-rbf, we relied on kernel combination to predict the pose from multiple features. For the FOLS model, we applied the following inference strategy. We computed the NN in feature space for both features individually and took the mean of the corresponding shared latent variables. We then obtained the private part by computing the NN in shared space and taking the corresponding private variables. Note that this proved more accurate than using NN on a single view, or on the concatenated views. Also, notice in Table 3 that the performance drops when structured sparsity is only imposed on either D?s or ?, showing the advantage of our model over simple structured sparsity approaches. Fig. 3(c) depicts the dictionary found by our method. Note that our approach allowed us to find latent dimensions shared among all views, as well as shared among the image features only. Finally, we studied the influence of the number of training examples on the performance of the different approaches. To this end, we varied the training set size from 5 to 100, and, for each size, randomly sampled 10 different training sets on the first walking cycle. In all cases, we kept the same test set as before. Fig. 4 shows the mean squared errors averaged over the 10 different sets as a function of the number of training examples. Note that, with small training sets, our method yields more accurate results than the baselines. 5 Conclusion In this paper, we have proposed an approach to learning a latent space factorized into dimensions shared across subsets of the views and dimensions private to each individual view. To this end, we have proposed to exploit the notion of structured sparsity, and have shown that multi-view learning could be addressed by alternately solving two convex optimization problems. We have demonstrated the effectiveness of our approach on the task of estimating 3D human pose from image features. In the future, we intend to study the use of our model to other tasks, such as classification. To this end, we would extend our approach to incorporating an additional group sparsity regularizer on the latent variables to encode class membership. 8 References [1] C. Archambeau and F. Bach. Sparse probabilistic projections. In Neural Information Processing Systems, 2008. [2] F. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In International Conference on Machine learning. ACM New York, NY, USA, 2004. [3] F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical Report 688, Department of Statistics, University of California, Berkeley, 2005. [4] S. Bengio, F. Pereira, Y. Singer, and D. Strelow. Group sparse coding. Neural Information Processing Systems, 2009. [5] A. Bosch, A. Zisserman, and X. Munoz. Image classification using random forests and ferns. In International Conference on Computer Vision, 2007. [6] C. H. Ek, P. Torr, and N. Lawrence. Gaussian process latent variable models for human pose estimation. In Joint Workshop on Machine Learning and Multimodal Interaction, 2007. [7] C. H. Ek, P. Torr, and N. Lawrence. Ambiguity modeling in latent spaces. In Joint Workshop on Machine Learning and Multimodal Interaction, 2008. [8] A. Geiger, R. Urtasun, and T. Darrell. Rank priors for continuous non-linear dimensionality reduction. 
In Conference on Computer Vision and Pattern Recognition, 2009. [9] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, May 2010. [10] A. Kanaujia, C. Sminchisescu, and D. N. Metaxas. Semi-supervised hierarchical models for 3d human pose reconstruction. In Conference on Computer Vision and Pattern Recognition, 2007. [11] A. Klami and S. Kaski. Probabilistic approach to detecting dependencies between data sets. Neurocomputing, 72:39?46, 2008. [12] M. Kuss and T. Graepel. The geometry of kernel canonical correlation analysis. Technical Report TR-108, Max Planck Institute for Biological Cybernetics, T?ubingen, Germany, 2003. [13] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Neural Information Processing Systems, 2006. [14] G. Leen. Context assisted information extraction. PhD thesis, University the of West of Scotland, University of the West of Scotland, High Street, Paisley PA1 2BE, Scotland, 2008. [15] R. Navaratnam, A. Fitzgibbon, and R. Cipolla. The Joint Manifold Model for Semi-supervised Multivalued Regression. In International Conference on Computer Vision, Rio, Brazil, October 2007. [16] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607?609, 1996. [17] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for l1,infinity regularization. In International Conference on Machine Learning, 2009. [18] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In Conference on Computer Vision and Pattern Recognition, 2008. [19] G. Rogez, J. Rihan, S. Ramalingam, C. Orrite, and P. Torr. Randomized Trees for Human Pose Detection. In Conference on Computer Vision and Pattern Recognition, 2008. [20] M. Salzmann, C.-H. Ek, R. Urtasun, and T. Darrell. Factorized orthogonal latent spaces. In International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, May 2010. [21] A. P. Shon, K. Grochow, A. Hertzmann, and R. P. N. Rao. Learning shared latent structure for image synthesis and robotic imitation. In Neural Information Processing Systems, pages 1233?1240, 2006. [22] L. Sigal and M. J. Black. Humaneva: Synchronized video and motion capture dataset for evaluation of articulated human motion. Technical Report CS-06-08, Brown University, 2006. [23] L. Sigal, R. Memisevic, and D. J. Fleet. Shared kernel information embedding for discriminative inference. In Conference on Computer Vision and Pattern Recognition, 2009. [24] S. Sonnenburg, G. R?atsch, C. Sch?afer, and B. Sch?olkopf. Large scale multiple kernel learning. The Journal of Machine Learning Research, 7:1531?1565, 2006. [25] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267?288, 1996. [26] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49?67, 2006. 9
Learning Kernels with Radiuses of Minimum Enclosing Balls Guangyun Chen Changshui Zhang Kun Gai State Key Laboratory on Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology (TNList) Department of Automation, Tsinghua University, Beijing 100084, China {gaik02, cgy08}@mails.thu.edu.cn, [email protected] Abstract In this paper, we point out that there exist scaling and initialization problems in most existing multiple kernel learning (MKL) approaches, which employ the large margin principle to jointly learn both a kernel and an SVM classifier. The reason is that the margin itself can not well describe how good a kernel is due to the negligence of the scaling. We use the ratio between the margin and the radius of the minimum enclosing ball to measure the goodness of a kernel, and present a new minimization formulation for kernel learning. This formulation is invariant to scalings of learned kernels, and when learning linear combination of basis kernels it is also invariant to scalings of basis kernels and to the types (e.g., L1 or L2 ) of norm constraints on combination coefficients. We establish the differentiability of our formulation, and propose a gradient projection algorithm for kernel learning. Experiments show that our method significantly outperforms both SVM with the uniform combination of basis kernels and other state-of-art MKL approaches. 1 Introduction In the past years, kernel methods, like support vector machines (SVM), have achieved great success in many learning problems, such as classification and regression. For such tasks, the performance strongly depends on the choice of the kernels used. A good kernel function, which implicitly characterizes a suitable transformation of input data, can greatly benefit the accuracy of the predictor. However, when there are many available kernels, it is difficult for the user to pick out a suitable one. Kernel learning has been developed to jointly learn both a kernel function and an SVM classifier. Chapelle et al. [1] present several principles to tune parameters in kernel functions. In particular, when the learned kernel is restricted to be a linear combination of multiple basis kernels, the problem of learning the combination coefficients as well as an SVM classifier is usually called multiple kernel learning (MKL). Lanckriet et al. [2] formulate the MKL problem as a quadratically constrained quadratic programming problem, which implicitly uses an L1 norm constraint to promote sparse combinations. To enhance the computational efficiency, different approaches for solving this MKL problem have been proposed using SMO-like strategies [3], semi-infinite linear program [4], gradient-based methods [5], and second-order optimization [6]. Some other subsequent work explores more generality of multiple kernel learning by promoting non-sparse [7, 8] or group-sparse [9] combinations of basis kernels, or using other forms of learned kernels, e.g., a combination of an exponential number of kernels [10] or nonlinear combinations [11, 12, 13]. Most existing MKL approaches employ the objective function used in SVM. With an acceptable empirical loss, they aim to find the kernel which leads to the largest margin of the SVM classifier. However, despite the substantial progress in both the algorithmic design and the theoretical understanding for the MKL problem, none of the approaches seems to reliably outperform baseline 1 methods, like SVM with the uniform combination of basis kernels [13]. 
As will be shown in this paper, the large margin principle used in these methods causes the scaling problem and the initialization problem, which can strongly affect final solutions of learned kernels as well as performances. It implicates that the large margin preference can not reliably result in a good kernel, and thus the margin itself is not a suitable measure of the goodness of a kernel. Motivated by the generalization bounds for SVM and kernel learning, we use the ratio between the margin of the SVM classifier and the radius of the minimum enclosing ball (MEB) of data in the feature space endowed with the learned kernel as a measure of the goodness of the kernel, and propose a new kernel learning formulation. Our formulation differs from the radius-based principle by Chapelle et al. [1]. Their principle is sensitive to kernel scalings when a nonzero empirical loss is allowed, also causing the same problems as the margin-based formulations. We prove that our formulation is invariant to scalings of learned kernels, and also invariant to initial scalings of basis kernels and to the types (e.g., L1 or L2 ) of norm constraints on kernel parameters for the MKL problem. Therefore our formulation completely addresses the scaling and initialization problems. Experiments show that our approach gives significant performance improvements both over SVM with the uniform combination of basis kernels and over other state-of-art kernel learning methods. Our proposed kernel learning problem can be reformulated to a tri-level optimization problem. We establish the differentiability of a general family of multilevel optimization problems. This enables us to generally tackle the radius of the minimal enclosing ball, or other complicated optimal value functions, in the kernel learning framework by simple gradient-based methods. We hope that our results will also benefit other learning problems. The paper is structured as follows. Section 2 shows problems in previous MKL formulations. In Section 3 we present a new kernel learning formulation and give discussions. Then, we study the differentiability of multilevel optimization problems and give an efficient algorithm in Section 4 and Section 5, respectively. Experiments are shown in Section 6. Finally, we close with a conclusion. 2 Measuring how good a kernel is Let D = {(x1 , y1 ), ..., (xn , yn )} denote a training set of n pairs of input points xi ? X and target labels yi ? {?1}. Suppose we have a kernel family K = {k : X ? X ? R}, in which any kernel function k implicitly defines a transformation ?(?; k) from the input space X to a feature space by k(xc , xd ) = h?(xc ; k), ?(xd ; k)i. Let a classifier be linear in the feature space endowed with k, as f (x; w, b, k) = h?(x; k), wi + b, (1) the sign of which is used to classify data. The task of kernel learning (for binary classification) is to learn both a kernel function k ? K and a classifier w and b. To make the problem trackable, the learned kernel is usually restricted to a parametric form k (?) (?, ?), where ? = [?i ]i is the kernel parameter. Then the problem of learning a kernel transfers to the problem of learning a kernel parameter ?. The most common used kernel form is a linear combination of multiple basis kernels, as Pm k (?) (?, ?) = j=1 ?j kj (?, ?), ?j ? 0. (2) 2.1 Problems in multiple kernel learning Most existing MKL approaches, e.g., [2, 4, 5], employ the equivalent objective function as in SVM: P mink,w,b,?i 12 kwk2 + C i ?i , s.t. yi f (xi ; w, b, k) + ?i ? 1, ?i ? 
0, (3) where ?i is the hinge loss. This problem can be reformulated to ? mink : G(k), P 1 2 ? where G(k) = minw,b,?i 2 kwk + C i ?i , s.t. yi f (xi ; w, b, k) + ?i ? 1, ?i ? 0. (4) (5) For any kernel k, the optimal classifier w and b is actually the SVM classifier with the kernel k. Let ? denote the margin of the SVM classifier in the feature space endowed with k. We have ? ?2 = kwk2 . Thus the term kwk2 makes formulation (3) prefer the kernel that results in an SVM classifier with a larger margin (as well as an acceptable empirical loss). Here, a natural question is that for different kernels whether the margins of SVM classifiers can well measure the goodness of the kernels. 2 To answer this question, we consider what happens when a kernel k is enlarged ? by a scalar a: k new = ak, where a > 1. The corresponding transformations satisfy ?(?; k new ) = ? a?(?; k). For k, let {w? , b? } denote the optimal solution of (5). For k new , we set w2 = w1? / a and b2 = b?1 , then we have kw2 k2 = kw1? k2 /a, and f (x; w2 , b2 , k new ) and f (x; w1? , b?1 , k) are the same classifier, ? ? new ) ? 1 kw2 k2 + C P ?i < 1 kw? k2 + resulting in the same ?i . Then we obtain: G(ak) = G(k 1 i 2 2 P ? C i ?i = G(k), which means the enlarged kernel gives a larger margin and a smaller objective value. As a consequence, on one hand, the large margin preference guides the scaling of the learned kernel to be as large as possible. On the other hand, any kernel, even the one resulting in a bad performance, can give an arbitrarily large margin by enlarging its scaling. This problem is called the scaling problem. It shows that the margin is not a suitable measure of the goodness of a kernel. In the linear combination case, the scaling problem causes that the kernel parameter ? does not converge in the optimization. A remedy is to use a norm constraint on ?. However, it has been shown in recent literature [7, 9] that different types of norm constraints fit different data sets. So users face the difficulty of choosing a suitable norm constraint. Even after a norm constraint is selected, the scaling problem also causes another problem about the initialization. Consider an L1 norm constraint and a learned kernel which is a combination of two basis kernels, as k (?) (?, ?) = ?1 k1 (?, ?) + ?2 k2 (?, ?), ?1 , ?2 ? 0, ?1 + ?2 = 1. (6) To leave the empirical loss out of consideration, assume: (a) both k1 and k2 can lead to zero empirical loss, (b) k1 results in a larger margin than k2 . For simplicity, we further restrict ?1 and ?2 to be equal to 0 or 1, to enable kernel selection. The MKL formulation (3), of course, will choose k1 from {k1 , k2 } due to the large margin preference. Then we set k1new (?, ?) = ak1 (?, ?), where a is a small scalar to make that k1new has a smaller margin than k2 . After k1new substitutes for k1 , the MKL formulation (3) will select k2 from {k1new , k2 }. The example shows that the final solution can be greatly affected by the initial scalings of basis kernels, although a norm constraint is used. This problem is called the initialization problem. When the MKL framework is extended from the linear combination cases to the nonlinear cases, the scaling problem becomes more serious, as even a finite scaling of the learned kernel may not be generally guaranteed by a simple norm constraint on kernel parameters for some kernel forms. These problems implicate that the margin itself is not enough to measure the goodness of kernels. 
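The scaling argument above is easy to reproduce numerically. The sketch below estimates $R^2(k)$ from a kernel matrix with a simple Badoiu-Clarkson/Frank-Wolfe pass over the MEB dual (written out as problem (8) just below), then shows that scaling a kernel by $a$ lets the margin-based SVM objective of (4)-(5) keep improving by pure rescaling, while the objective computed on the radius-normalized kernel $k / R^2(k)$, our stand-in for the margin-and-radius ratio advocated here, stays unchanged. The iteration count, toy data, and the use of scikit-learn's `SVC` (reading the optimal value off its dual variables) are assumptions of this illustration.

```python
import numpy as np
from sklearn.svm import SVC

def meb_radius2(K, iters=2000):
    """Estimate the squared MEB radius via a Badoiu-Clarkson / Frank-Wolfe
    pass on the MEB dual: keep the center implicitly as
    c = sum_i beta_i phi(x_i) and move toward the farthest point."""
    n = K.shape[0]
    beta = np.full(n, 1.0 / n)
    for t in range(1, iters + 1):
        Kb = K @ beta
        d2 = np.diag(K) - 2.0 * Kb + beta @ Kb   # ||phi(x_i) - c||^2
        j = int(np.argmax(d2))
        eta = 1.0 / (t + 1)
        beta *= 1.0 - eta
        beta[j] += eta
    Kb = K @ beta
    return float(beta @ np.diag(K) - beta @ Kb)  # MEB dual objective value

def svm_objective(K, y, C=1.0):
    """Optimal soft-margin SVM value, read off the dual solution (strong
    duality): sum_i a_i - 0.5 * sum_ij a_i a_j y_i y_j K_ij."""
    svc = SVC(C=C, kernel="precomputed").fit(K, y)
    a = np.zeros(len(y))
    a[svc.support_] = svc.dual_coef_.ravel()     # stores alpha_i * y_i
    return float(np.abs(a).sum() - 0.5 * a @ K @ a)

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 4))
y = np.where(X[:, 0] + 0.3 * rng.standard_normal(80) > 0, 1, -1)
K = X @ X.T
for a_scale in (1.0, 4.0):
    Ka = a_scale * K
    print(a_scale,
          svm_objective(Ka, y),                   # drops as the kernel grows
          meb_radius2(Ka),                        # grows linearly with a_scale
          svm_objective(Ka / meb_radius2(Ka), y)) # scale-invariant measure
```

Up to solver tolerance, the last printed value is identical for both scales, while the raw SVM objective keeps improving with the scaling, which is the scaling problem in miniature; the normalization by $R^2(k)$ anticipates the radius-weighted formulation introduced in the next section.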
2.2 Measuring the goodness of kernels with the radiuses of MEB Now we need to find a more reasonable way to measure the goodness of kernels. Below we introduce the generalization error bounds for SVM and kernel learning, which inspire us to consider the minimum enclosing ball to learn a kernel. For SVM with a fixed kernel, it is well known that the estimation error, p which denotes the gap between the expected error and the empirical error, is bounded by O(R2 ? ?2 )/n, where R is the radius of the minimum enclosing ball (MEB) of data in the feature space endowed with the kernel used. For SVM with a kernel learned from a kernel family K, if we restrict that the radius of the minimum enclosing ball in the feature space endowed with the learned kernel to be no larger than R, then the theoretical results of Srebro and Ben-David [14] say: for any fixed margin ? > 0 and any fixed radius R > 0, with 1 ? ? over a training set of size n, the estimation error is q probability at least 2 en? 8 128en3 R2 128nR2 ? log ?). Scalar d? denotes no larger than n (2 + d? log ? 2 d? + 256 R ? 2 log 8R log ?2 the pseudodimension [14] of the kernel family K. For example, d? of linear combination kernels is no larger than the number of basis kernels, and d? of the Gaussian kernels with a form of a b 2 k (?) (xa , xb ) = e??kx ?x k is no larger than 1 (See [14] for more details). The above results clearly state that the generalization error bounds for SVM with both fixed kernels and learned kernels depend on the ratio between the margin ? and the radius R of the minimum enclosing ball of data. Although some new results of the generalization bounds for kernel learning, like [15], give different types of dependencies on d? , they also rely on the margin-and-radius ratio. In SVM with a fixed kernel, the radius R is a constant and we can safely minimize kwk2 (as well as the empirical loss). However, in kernel learning, the radius R changes drastically from one kernel to another (An example Pp is given in the supplemental materials: when we uniformly combine p basis kernels by kunif = j=1 p1 kj , the squared radius becomes only p1 of the squared radius of each basis kernel.). Thus we should also take the radius into account. As a result, we use the ratio between the margin ? and the radius R to measure how good a kernel is for kernel learning. 3 Given any kernel k, the radius of the minimum enclosing ball, denoted by R(k), can be obtained by: R2 (k) = miny,c y, s.t. y ? k?(xi ; k) ? ck2 . (7) This problem is a convex minimization problem, being equivalent to its dual problem, as P P P R2 (k) = max?i s.t. i ?i k(xi , xi ) ? i,j ?i k(xi , xj )?j , i ?i = 1, ?i ? 0, 2 2 (8) 2 which shows a property of R (k): for any kernel k and any scalar a > 0, we have R (ak) = aR (k). 3 Learning kernels with the radiuses Considering the ratio between the margin and the radius of MEB, we propose a new formulation, as P mink,w,b,?i 21 R2 (k)kwk2 + C i ?i , s.t. yi (h?(xi ; k), wi + b) + ?i ? 1, ?i ? 0, (9) 2 where R2 (k)kwk is a radius-based regularizer that prefers a large ratio between the margin and the P radius, and i ?i is the hinge loss which is an upper bound of empirical misclassified error. This optimization problem is called radius based kernel learning problem, referred to as RKL. Chapelle et al. [1] also utilize the radius of MEB to tune kernel parameters for hard margin SVM. Our formulation (9) is equivalent to theirs if ?i is restricted to be zero. To give a soft margin version, they modify the kernel matrix K(?) = K(?) 
+ C1 I, resulting in a formulation equivalent to: P min 12 R2 (k (?) )kwk2 + CR2 (k (?) ) i ?i2 , s.t. yi (h?(xi ; k (?) ), wi + b) + ?i ? 1, ?i ? 0. (10) ?,w,b,?i The function R2 (k (?) ) in the second term, which may become small, makes that minimizing the objective function can not reliably give a small empirical loss, even when C is large. Besides, when ? we reduce the scaling of a kernel by multiplying it with a small scalar a and substitute w ? = w/ a for w to keep the same ?i , the objective function always decreases (due to the decrease of R2 in the empirical loss term), still leading to scaling problems. Do et al. [16] recently propose to learn a linear kernel combination, as defined in (2), through P P kw k2 P min 12 j ?jj + P ?jCR2 (kj ) i ?i2 , s.t. yi ( j hwj , ?(xi ; kj )i + b) + ?i ? 1, ?i ? 0. (11) ?,wj ,b,?i j Their objective function also can be always decreased by multiplying ? with a large scalar. Thus their method does not address the scaling problem, also resulting in the initialization problem. If we initially adjust the scalings of basis kernels to make each R(kj ) be equal to each other, then their formulation is equivalent to the margin-based formulation (3). Different from the above formulations, our formulation (9) is invariant to scalings of kernels. 3.1 Invariance to scalings of kernels Now we discuss the properties of formulation (9). The RKL problem can be reformulated to where G(k) = min 1 R2 (k)kwk2 w,b,?i 2 +C mink G(k), i ?i , s.t. yi (h?(xi ; k), wi + b) + ?i ? 1, ?i ? 0. P (12) (13) Functional G(k) defines a measure of the goodness of kernel functions, which consider a trade-off between the margin-and-radius ratio and the empirical loss. This functional is invariant to the scaling of k, as stated by the following proposition. Proposition 1. For any kernel k and any scalar a > 0, equation G(ak) = G(k) holds. Proof. For the scaled kernel ak, equation R2 (ak) = aR2 (k) holds. Thereby, we get P ? G(ak) = minw,b,?i a2 R2 (k)kwk2 + C i ?i , s.t. yi (h a?(xi ; k), wi + b) + ?i ? 1, ?i ? 0. (14) Let w ? ? a = w replace w in (14), and then (14) becomes equivalent to (13). Thus G(ak) = G(k). . For a parametric kernel form k (?) , the RKL problem transfers to minimizing a function g(?) = G(k (?) ). Here we temporarily focus on the linear combination case defined by (2), and use glinear (?) to denote g(?) in such case. Due to the scaling invariance, for any ? and any a > 0, we have glinear (a?) = glinear (?). It makes the problem of minimizing glinear (?) be invariant to the types of norm constraints on ?, as stated in the following. 4 Proposition 2. Given any norm definition N (?) and any set S ? R, suppose there exists c > 0 that satisfies c ? S. Let (a) denote the problem of minimizing glinear (?) s.t. ?i ? 0, and (b) denote the problem of minimizing glinear (?) s.t. ?i ? 0 and N (?) ? S. Then we have: (1) For any local c a (global) optimal solution of (a), denoted by ?a , N (? is also the local (global) optimal solution a) ? of (b). (2) For any local (global) optimal solution of (b), denoted by ?b , ?b is also the local (global) optimal solution of (a). Proof. The complete proof is given in the the supplemental materials. Here we only prove the equivalence of global optimal solutions of (a) and (b). On one hand, if ?a is the global optimal solution of (a), then for any ? that satisfies ?i ? 0 and N (?) ? S, we have glinear ( N c(?) ?a ) = c c a a also satisfies the constraint of (b), glinear (?a ) ? g(?). Due to N ( N (? a ) ? ) = c ? S, N (?) ? 
For a parametric kernel form k^(θ), the RKL problem becomes the minimization of the function g(θ) = G(k^(θ)). Here we temporarily focus on the linear combination case defined by (2), and write g_linear(θ) for g(θ) in this case. Due to the scaling invariance, for any θ and any a > 0 we have g_linear(aθ) = g_linear(θ). This makes the problem of minimizing g_linear(θ) invariant to the type of norm constraint placed on θ, as stated in the following.

Proposition 2. Given any norm definition N(θ) and any set S ⊂ R, suppose there exists c > 0 that satisfies c ∈ S. Let (a) denote the problem of minimizing g_linear(θ) s.t. θ_i ≥ 0, and (b) the problem of minimizing g_linear(θ) s.t. θ_i ≥ 0 and N(θ) ∈ S. Then: (1) For any local (global) optimal solution of (a), denoted θ^a, the point (c/N(θ^a)) θ^a is also a local (global) optimal solution of (b). (2) Any local (global) optimal solution of (b), denoted θ^b, is also a local (global) optimal solution of (a).

Proof. The complete proof is given in the supplemental material. Here we only prove the equivalence of the global optimal solutions of (a) and (b). On one hand, if θ^a is the global optimal solution of (a), then for any θ satisfying θ_i ≥ 0 and N(θ) ∈ S we have g_linear((c/N(θ)) θ^a) = g_linear(θ^a) ≤ g_linear(θ). Since N((c/N(θ^a)) θ^a) = c ∈ S, the point (c/N(θ^a)) θ^a also satisfies the constraint of (b), and thus (c/N(θ^a)) θ^a is the global optimal solution of (b). On the other hand, for any θ (θ_i ≥ 0), g_linear((c/N(θ)) θ) = g_linear(θ) by the scaling invariance. If θ^b is the global optimal solution of (b), then for any θ (θ_i ≥ 0), since (c/N(θ)) θ satisfies the constraint of (b), we have g_linear(θ^b) ≤ g_linear((c/N(θ)) θ) = g_linear(θ). Thus θ^b is the global optimal solution of (a).

As the problems of minimizing g_linear(θ) under different types of norm constraints on θ are all equivalent to the same problem without any norm constraint, they are equivalent to each other. Based on the above proposition, we can also reach another conclusion: in the linear combination case the minimization problem (12) is invariant to the initial scalings of the basis kernels (see below).

Proposition 3. Let k_j denote the basis kernels, and a_j > 0 the initial scaling coefficients of the basis kernels. Give a norm constraint N(θ) ∈ S, with the same definition as in Proposition 2. Let (a) denote the problem of minimizing G(Σ_j θ_j k_j) w.r.t. θ s.t. θ_i ≥ 0 and N(θ) ∈ S, and (b) the problem with different initial scalings: minimizing G(Σ_j θ_j a_j k_j) w.r.t. θ s.t. θ_i ≥ 0 and N(θ) ∈ S. Then: (1) Problems (a) and (b) have the same local and global optima. (2) For any local (global) optimal solution of (b), denoted θ^b, the point [ c a_j θ_j^b / N([a_t θ_t^b]_t) ]_j is also a local (global) optimal solution of (a).

Proof. By Proposition 2, problem (b) is equivalent to the one without any norm constraint: minimizing G(Σ_j θ_j a_j k_j) w.r.t. θ s.t. θ_i ≥ 0, which we denote problem (c). Let θ̃_j = a_j θ_j; then problem (c) is equivalent to the problem of minimizing G(Σ_j θ̃_j k_j) w.r.t. θ̃ s.t. θ̃_i ≥ 0, which we denote problem (d) (local and global optimal solutions of problems (c) and (d) are in one-to-one correspondence through the simple transform θ̃_j = a_j θ_j). Again by Proposition 2, problem (d) is equivalent to the one with N(θ) ∈ S, which is exactly problem (a). This gives conclusion (1). By proper transformations of the optimal solutions of these equivalent problems, we get conclusion (2).

Note that in Proposition 3, the optimal solutions of problems (a) and (b), which use different initial scalings of the basis kernels, actually result in the same kernel combinations up to scaling. As shown in the above three propositions, our proposed formulation not only completely addresses the scaling and initialization problems, but is also insensitive to the type of norm constraint used.

3.2 Reformulation as a tri-level optimization problem

The remaining task is to optimize the RKL problem (12). Given a parametric kernel form k^(θ), for any parameter θ, obtaining the value of the objective function g(θ) = G(k^(θ)) in (12) requires solving the SVM-like problem in (13), which is a convex minimization problem and can be solved through its dual. Indeed, the whole RKL problem transforms into a tri-level optimization problem:

  min_θ g(θ),   (15)

where

  g(θ) = max_α { Σ_i α_i − (1/(2 r²(θ))) Σ_{i,j} α_i α_j y_i y_j K_{i,j}(θ) },  s.t.  Σ_i α_i y_i = 0,  0 ≤ α_i ≤ C,   (16)

where

  r²(θ) = max_β { Σ_i β_i K_{i,i}(θ) − Σ_{i,j} β_i K_{i,j}(θ) β_j },  s.t.  Σ_i β_i = 1,  β_i ≥ 0.   (17)

The notation K(θ) denotes the kernel matrix [k^(θ)(x_i, x_j)]_{i,j}. The above formulation shows that evaluating g(θ) for any given θ requires solving a bi-level optimization problem: first, solve the MEB dual problem (17) and obtain the optimal value r²(θ) and the optimal solution, denoted β*_i; then, substitute r²(θ) into the objective function of the SVM dual problem (16), solve it, and obtain the value of g(θ), as well as the optimal solution of (16), denoted α*_i.
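Putting the two solvers together gives exactly this bi-level evaluation of g(θ); a minimal sketch for the linear combination case, reusing the meb_radius_sq and svm_dual_value routines sketched earlier:

```python
def g_of_theta(theta, Kms, y, C):
    """Evaluate g(theta) for K(theta) = sum_m theta_m K^m:
       first the MEB dual (17) for r^2(theta), then the SVM dual (16)."""
    K = sum(t * Km for t, Km in zip(theta, Kms))
    r2, beta = meb_radius_sq(K)
    g, alpha = svm_dual_value(K, y, C, r2)
    return g, alpha, beta, r2, K
```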
Unlike in other kernel learning approaches, here the optimization of the SVM dual problem relies on another optimal value function, r²(θ), making the RKL problem more challenging. If g(θ), the objective function of the top-level optimization, is differentiable and we can obtain its derivatives, then we can use a variety of gradient-based methods to solve the RKL problem. So in the next section, we study the differentiability of a general family of multilevel optimization problems.

4 Differentiability of the multilevel optimization problem

Danskin's theorem [17] states the differentiability of the optimal value of a single-level optimization problem, and has been applied in many MKL algorithms, e.g., [5, 12]. Unfortunately, it is not directly applicable to the optimal value of a multilevel optimization problem. Below we generalize Danskin's theorem and give new results about the multilevel optimization problem.

Let Y be a metric space, and let X, U and Z be normed spaces. Suppose: (1) the function g₁(x, u, z) is continuous on X × U × Z; (2) for all x ∈ X the function g₁(x, ·, ·) is continuously differentiable; (3) the function g₂(y, x, u), with g₂: Y × X × U → Z, is continuous on Y × X × U; (4) for all y ∈ Y the function g₂(y, ·, ·) is continuously differentiable; (5) the sets Φ_X ⊂ X and Φ_Y ⊂ Y are compact. With these notations, we propose the following theorem about bi-level optimal value functions.

Theorem 1. Define a bi-level optimal value function as

  v₁(u) = inf_{x ∈ Φ_X} g₁(x, u, v₂(x, u)),   (18)

where v₂(x, u) is another optimal value function,

  v₂(x, u) = inf_{y ∈ Φ_Y} g₂(y, x, u).   (19)

If for any x and u, g₂(·, x, u) has a unique minimizer y*(x, u) over Φ_Y, then y*(x, u) is continuous on X × U, and v₁(u) is directionally differentiable. Furthermore, if for any u, g₁(·, u, v₂(·, u)) also has a unique minimizer x*(u) over Φ_X, then

1. the minimizer x*(u) is continuous on U,
2. v₁(u) is continuously differentiable, and its derivative equals

  dv₁(u)/du = ∂g₁(x*, u, v₂)/∂u + ( ∂g₁(x*, u, v₂)/∂v₂ ) · ∂v₂(x*, u)/∂u, evaluated at v₂ = v₂(x*, u), where ∂v₂(x*, u)/∂u = ∂g₂(y*, x*, u)/∂u.   (20)

The proof is given in the supplemental material. To apply Theorem 1 to the objective function g(θ) of the RKL problem (15), we must make sure the following two conditions are satisfied. First, both the MEB dual problem (17) and the SVM dual problem (16) must have unique optimal solutions; this is guaranteed when the kernel matrix K(θ) is strictly positive definite. Second, the kernel matrix K(θ) must be continuously differentiable in θ. Both conditions are met in the linear combination case when each basis kernel matrix is strictly positive definite, and they can also be satisfied easily in nonlinear cases, as in [11, 12]. If these two conditions are met, then g(θ) is continuously differentiable and

  dg(θ)/dθ = −(1/(2 r²(θ))) Σ_{i,j} α*_i α*_j y_i y_j dK_{i,j}(θ)/dθ + (1/(2 r⁴(θ))) Σ_{i,j} α*_i α*_j y_i y_j K_{i,j}(θ) · dr²(θ)/dθ,   (21)

where α*_i is the optimal solution of the SVM dual problem (16), and

  dr²(θ)/dθ = Σ_i β*_i dK_{i,i}(θ)/dθ − Σ_{i,j} β*_i (dK_{i,j}(θ)/dθ) β*_j,   (22)

where β*_i is the optimal solution of the MEB dual problem (17). In the above equations, the value of dK_{i,j}(θ)/dθ is needed. It depends on the specific form of the parametric kernels, and its derivation is easy. For example, for the linear combination kernel K_{i,j}(θ) = Σ_m θ_m K^m_{i,j}, we have ∂K_{i,j}(θ)/∂θ_m = K^m_{i,j}; for the Gaussian kernel K_{i,j}(θ) = e^(−θ‖x_i − x_j‖²), we have dK_{i,j}(θ)/dθ = −K_{i,j}(θ) ‖x_i − x_j‖².
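For the linear combination case, (21) and (22) translate directly into code; a minimal sketch (argument names are illustrative):

```python
import numpy as np

def rkl_gradient(alpha, beta, y, Kms, K_theta, r2):
    """Gradient (21)-(22) for K(theta) = sum_m theta_m K^m, where dK/dtheta_m = K^m."""
    ay = alpha * y
    quad = float(ay @ (K_theta @ ay))   # sum_ij alpha_i alpha_j y_i y_j K_ij(theta)
    grad = np.empty(len(Kms))
    for m, Km in enumerate(Kms):
        dr2 = float(np.diag(Km) @ beta - beta @ (Km @ beta))          # (22)
        grad[m] = (-float(ay @ (Km @ ay)) / (2.0 * r2)
                   + quad * dr2 / (2.0 * r2 ** 2))                    # (21)
    return grad
```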
5 Algorithm

With the derivative of g(θ), we use the standard gradient projection approach with the Armijo rule [18] for selecting step sizes to solve the RKL problem. To compare with the most popular kernel learning algorithm, SimpleMKL [5], in the experiments we employ the linear combination kernel form with nonnegative combination coefficients, as defined in (2). In addition, we consider three types of norm constraints on the kernel parameters (combination coefficients): L1, L2, and no norm constraint. The L1 and L2 norm constraints are Σ_j θ_j = 1 and Σ_j θ_j² = 1, respectively. The projection for the L1 norm and nonnegativity constraints can be done efficiently by the method of Duchi et al. [19]. The projection for only nonnegativity constraints is accomplished by setting negative elements to zero. The projection for the L2 norm and nonnegativity constraints needs another step after eliminating negative values: normalize θ by multiplying it with ‖θ‖₂⁻¹. (A sketch of these projection steps appears at the end of this section.)

In our gradient projection algorithm, each evaluation of the objective function g(θ) requires solving an MEB problem (17) and an SVM problem (16), whereas the gradient calculation and projection steps have negligible time complexity compared to the MEB and SVM solvers. The MEB and SVM problems have similar forms of objective functions and constraints, and both can be solved efficiently by SMO algorithms. Moreover, the previous solutions α*_i and β*_i can be used as a "hot start" to accelerate the solvers: by Theorem 1, the optimal solutions of the two problems are continuous in the kernel parameter θ, so when θ moves a small step, the optimal solutions also change only a little. In real experiments our approach usually reaches approximate convergence within one or two dozen invocations of the SVM and MEB solvers (for lack of space, examples of the convergence speed of our algorithm are shown in the supplemental material).

In linear combination cases, the RKL problem, like the radius-based formulation of Chapelle et al. [1], is not convex, and gradient-based methods only guarantee local optima. The following states the nontrivial quality of local optimal solutions and their connection to related convex problems.

Proposition 4. In linear combination cases, for any local optimal solution of the RKL problem, denoted θ*, there exist C₁ > 0 and C₂ > 0 such that θ* is the global optimal solution of the following convex problem:

  min_{θ,w_j,b,ξ}  (1/2) Σ_j ‖w_j‖² + C₁ r²(θ) + C₂ Σ_i ξ_i²,  s.t.  y_i (Σ_j ⟨w_j, φ(x_i; θ_j k_j)⟩ + b) + ξ_i ≥ 1,  ξ_i ≥ 0.   (23)

The proof can be found in the supplemental material. The proposition also suggests another possible way to address the RKL problem: iteratively solve the convex problem (23) while searching over C₁ and C₂. However, it is difficult to find exact values of C₁ and C₂ by a grid search, and even a rough search results in too high a computational load. Besides, such a method does not extend to nonlinear parametric kernel forms. In the experiments, we therefore demonstrate that the gradient-based approach gives satisfactory performance, significantly better than that of SVM with the uniform combination of basis kernels and of other kernel learning approaches.
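The projection steps mentioned above, and a bare-bones driver tying the pieces together, can be sketched as follows (the Armijo line search is replaced by a fixed step for brevity; this is an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def project_simplex(theta):
    """Euclidean projection onto {theta >= 0, sum(theta) = 1} (Duchi et al. [19])."""
    u = np.sort(theta)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - 1.0)[0][-1]
    return np.maximum(theta - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def project_l2_nonneg(theta):
    """Clip negatives, then rescale to unit L2 norm, as described above."""
    theta = np.maximum(theta, 0.0)
    nrm = np.linalg.norm(theta)
    return theta / nrm if nrm > 0 else theta

def rkl_solve(Kms, y, C, lr=1.0, iters=50):
    """Minimal projected-gradient loop for min_theta g(theta) (L1-simplex version),
       reusing the g_of_theta and rkl_gradient sketches above."""
    theta = np.full(len(Kms), 1.0 / len(Kms))
    for _ in range(iters):
        _, alpha, beta, r2, K = g_of_theta(theta, Kms, y, C)
        theta = project_simplex(theta - lr * rkl_gradient(alpha, beta, y, Kms, K, r2))
    return theta
```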
6 Experiments

In this section, we illustrate the performance of our RKL approach, in comparison with SVM with the uniform combination of basis kernels (Unif), the margin-based MKL method using formulation (3) (MKL), and the kernel learning principle of Chapelle et al. [1] using formulation (10) (KL-C). The evaluation is made on eleven publicly available data sets from the UCI repository [20] and LIBSVM Data [21] (see Table 1). All data sets have been normalized to zero mean and unit variance on every feature. The basis kernels are the same as in SimpleMKL [5]: 10 Gaussian kernels with bandwidths σ_G ∈ {0.5, 1, 2, 5, 7, 10, 12, 15, 17, 20} and 10 polynomial kernels of degree 1 to 10. All kernel matrices have been normalized to unit trace, as in [5, 7]. Note that although our RKL formulation is theoretically invariant to the initial scalings, the normalization is still applied in RKL to avoid numerical problems caused by large-valued kernel matrices in the SVM and MEB solvers. To show the impact of different norm constraints, we use three types: L1, L2, and no norm constraint. With no norm constraint, only RKL converges, so only its results are reported. The SVM toolbox used is LIBSVM [21]. MKL with the L1 norm constraint is solved by the code from SimpleMKL [5]. The other problems are solved by standard gradient projection methods, where the gradients of the MKL formulation (3) and of Chapelle's formulation (10) are calculated as in [5] and [1], respectively. The initial θ is set to (1/20) e, where e is the all-ones vector. The trade-off coefficient C in SVM, MKL, KL-C and RKL is automatically determined by 3-fold cross-validation on the training sets; in all methods, C is selected from the set S_coef = {0.01, 0.1, 1, 10, 100}. For each data set, we split the data into five parts, and each time we use four parts as the training set and the remaining one as the test set. The average accuracies with standard deviations and the average numbers of selected basis kernels are reported in Table 1.

Table 1: Testing accuracies (Acc., with standard deviations in parentheses) and average numbers of selected basis kernels (Nk). Numbers for our method are highlighted (bold in the original) when it outperforms both Unif and the other two kernel learning approaches under the same norm constraint.

Data set      | 1 Unif  Acc. (Nk)  | 2 MKL L1  Acc. (Nk) | 3 KL-C L1  Acc. (Nk) | 4 Ours L1  Acc. (Nk)
Ionosphere    | 94.0 (1.4)  20     | 92.9 (1.6)  3.8     | 86.0 (1.9)  4.0      | 95.7 (0.9)  2.8
Splice        | 51.7 (0.1)  20     | 79.5 (1.9)  1.0     | 80.5 (1.9)  2.8      | 86.5 (2.4)  3.2
Liver         | 58.0 (0.0)  20     | 59.1 (1.4)  4.2     | 62.9 (3.5)  4.0      | 64.1 (4.2)  3.6
Fourclass     | 81.2 (1.9)  20     | 97.7 (1.2)  7.0     | 94.0 (1.2)  2.0      | 100 (0.0)   1.0
Heart         | 83.7 (6.1)  20     | 84.1 (5.7)  7.4     | 83.3 (5.9)  1.8      | 84.1 (5.7)  5.2
Germannum     | 70.0 (0.0)  20     | 70.0 (0.0)  7.2     | 71.9 (1.8)  9.8      | 73.7 (1.6)  4.8
Musk1         | 61.4 (2.9)  20     | 85.5 (2.9)  1.6     | 73.9 (2.9)  2.0      | 93.3 (2.3)  4.0
Wdbc          | 94.4 (1.8)  20     | 97.0 (1.8)  1.2     | 97.4 (2.3)  4.6      | 97.4 (1.6)  6.2
Wpbc          | 76.5 (2.9)  20     | 76.5 (2.9)  7.2     | 52.2 (5.9)  9.6      | 76.5 (2.9)  17
Sonar         | 76.5 (1.8)  20     | 82.3 (5.6)  2.6     | 80.8 (5.8)  7.4      | 86.0 (2.6)  2.6
Coloncancer   | 67.2 (11)   20     | 82.6 (8.5)  13      | 74.5 (4.4)  11       | 84.2 (4.2)  7.2

Data set      | 5 MKL L2  Acc. (Nk) | 6 KL-C L2  Acc. (Nk) | 7 Ours L2  Acc. (Nk) | 8 Ours No  Acc. (Nk)
Ionosphere    | 94.3 (1.5)  20      | 84.4 (1.6)  18       | 95.7 (0.9)  3.0      | 95.7 (0.9)  3.0
Splice        | 82.0 (2.2)  20      | 74.0 (2.6)  14       | 86.5 (2.4)  2.2      | 86.3 (2.5)  3.2
Liver         | 67.0 (3.8)  20      | 64.1 (3.9)  11       | 64.1 (4.2)  8.0      | 64.3 (4.3)  6.6
Fourclass     | 97.3 (1.6)  20      | 94.0 (1.3)  17       | 100 (0.0)   1.0      | 100 (0.0)   1.6
Heart         | 83.7 (5.8)  20      | 83.3 (5.1)  19       | 84.4 (5.9)  5.4      | 84.8 (5.0)  5.8
Germannum     | 71.5 (0.8)  20      | 71.6 (2.1)  13       | 73.9 (1.2)  6.0      | 73.9 (1.8)  5.8
Musk1         | 87.4 (3.0)  20      | 61.9 (3.1)  19       | 93.5 (2.2)  3.8      | 93.3 (2.3)  3.8
Wdbc          | 96.8 (1.6)  20      | 97.4 (2.0)  11       | 97.6 (1.9)  5.8      | 97.6 (1.9)  5.8
Wpbc          | 75.9 (1.8)  20      | 51.0 (6.6)  17       | 76.5 (2.9)  15       | 76.5 (2.9)  15
Sonar         | 85.2 (2.9)  20      | 80.2 (5.9)  11       | 86.0 (2.6)  2.6      | 86.0 (3.3)  3.0
Coloncancer   | 76.5 (9.0)  20      | 76.0 (3.6)  15       | 84.2 (4.2)  5.6      | 84.2 (4.2)  7.6

The results in Table 1 can be summarized as follows. (a) RKL gives the best results on most sets. Under L1 norm constraints, RKL (Index 4) outperforms all other methods (Index 1, 2, 3) on 8 out of 11 sets, and equals the best results of the other methods on the remaining 3 sets. In particular, RKL gains 5 or more points of accuracy on Splice, Liver and Musk1 over MKL, and gains more than 9 points on four sets over KL-C. Under L2 norm constraints the results are similar: RKL (Index 7) outperforms the other methods (Index 5, 6) on 10 out of 11 sets, with only 1 inverse result. (b) Both MKL and KL-C are sensitive to the type of norm constraint (compare Index 2 and 5, as well as 3 and 6). As shown in recent literature [7, 9], for the MKL formulation different types of norm constraints fit different data sets. However, RKL outperforms MKL (as well as KL-C) under both L1 and L2 norm constraints on most sets. (c) RKL is invariant to the type of norm constraint: see Index 4, 7 and 8, where most accuracy numbers are identical; the few slight differences are likely due to the precision of numerical computation. (d) For MKL, the L1 norm constraint always results in sparse combinations, whereas the L2 norm constraint always gives non-sparse results (see Index 2 and 5). (e) Interestingly, our RKL gives sparse solutions on most sets, whatever type of norm constraint is used. As there usually exist redundancies among the basis kernels, searching for good kernels and small empirical loss often leads directly to sparse solutions. We note that KL-C under L2 norm constraints also slightly promotes sparsity (Index 6). Compared to KL-C under L2 norm constraints, RKL provides not only higher performance but also more sparsity, which benefits both interpretability and computational efficiency at prediction time.

7 Conclusion

In this paper, we show that the margin term used in previous MKL formulations is not a suitable measure of the goodness of kernels, resulting in scaling and initialization problems. We propose a new formulation, called RKL, which uses the ratio between the margin and the radius of the MEB to learn kernels. We prove that our formulation is invariant to kernel scalings, to the scalings of the basis kernels, and to the type of norm constraint for the MKL problem. Then, by establishing the differentiability of a general family of multilevel optimal value functions, we propose a gradient-based algorithm to address the RKL problem, and we characterize the quality of the solutions of our algorithm. The experiments validate that our approach outperforms both SVM with the uniform combination of basis kernels and other state-of-the-art kernel learning methods.

Acknowledgments

The work is supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 60835002 and 61075004) and the National Basic Research Program (973 Program) (No. 2009CB320602).

References

[1] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1):131–159, 2002.
[2] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L.E. Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. The Journal of Machine Learning Research, 5:27–72, 2004.
[3] F.R. Bach, G.R.G. Lanckriet, and M.I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004), 2004.
[4] S. Sonnenburg, G. Rätsch, and C. Schäfer. A general and efficient multiple kernel learning algorithm. In Adv. Neural Inform. Process. Syst. (NIPS 2005), 2006.
[5] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[6] O. Chapelle and A. Rakotomamonjy. Second order optimization of kernel parameters. In Proc. of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008.
[7] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K. Müller, and A. Zien. Efficient and accurate lp-norm multiple kernel learning. In Adv. Neural Inform. Process. Syst. (NIPS 2009), 2009.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Uncertainty in Artificial Intelligence, 2009.
[9] J. Saketha Nath, G. Dinesh, S. Raman, Chiranjib Bhattacharyya, Aharon Ben-Tal, and K. R. Ramakrishnan. On the algorithmics and applications of a mixed-norm based kernel learning formulation. In Adv. Neural Inform. Process. Syst. (NIPS 2009), 2009.
[10] F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Adv. Neural Inform. Process. Syst. (NIPS 2008), 2008.
[11] M. Gönen and E. Alpaydin. Localized multiple kernel learning. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), 2008.
[12] M. Varma and B.R. Babu. More generality in efficient multiple kernel learning. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), 2009.
[13] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning non-linear combinations of kernels. In Adv. Neural Inform. Process. Syst. (NIPS 2009), 2009.
[14] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In Proceedings of the International Conference on Learning Theory (COLT 2006), pages 169–183. Springer, 2006.
[15] Yiming Ying and Colin Campbell. Generalization bounds for learning the kernel. In Proceedings of the International Conference on Learning Theory (COLT 2009), 2009.
[16] H. Do, A. Kalousis, A. Woznica, and M. Hilario. Margin and radius based multiple kernel learning. In Proceedings of the European Conference on Machine Learning (ECML 2009), 2009.
[17] J.M. Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, pages 641–664, 1966.
[18] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, September 1999.
[19] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), 2008.
[20] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007. Software available at http://www.ics.uci.edu/~mlearn/MLRepository.html.
[21] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Epitome driven 3-D Diffusion Tensor image segmentation: on extracting specific structures

Kamiya Motwani, Nagesh Adluru, Chris Hinrichs, Andrew Alexander, Vikas Singh
Computer Sciences; Biostatistics & Medical Informatics; Medical Physics, University of Wisconsin
{kmotwani,hinrichs,vsingh}@cs.wisc.edu, {adluru,alalexander2}@wisc.edu

Abstract

We study the problem of segmenting specific white matter structures of interest from Diffusion Tensor (DT-MR) images of the human brain. This is an important requirement in many Neuroimaging studies: for instance, to evaluate whether a brain structure exhibits group level differences as a function of disease in a set of images. Typically, interactive expert-guided segmentation has been the method of choice for such applications, but this is tedious for the large datasets common today. To address this problem, we endow an image segmentation algorithm with "advice" encoding some global characteristics of the region(s) we want to extract. This is accomplished by constructing (using expert-segmented images) an epitome of a specific region, as a histogram over a bag of "words" (e.g., suitable feature descriptors). Given such a representation, the problem reduces to segmenting a new brain image with additional constraints that enforce consistency between the segmented foreground and the pre-specified histogram over features. We present combinatorial approximation algorithms to incorporate such domain-specific constraints for Markov Random Field (MRF) segmentation. Making use of recent results on image co-segmentation, we derive effective solution strategies for our problem. We provide an analysis of solution quality, and present promising experimental evidence showing that many structures of interest in Neuroscience can be extracted reliably from 3-D brain image volumes using our algorithm.

1 Introduction

Diffusion Tensor Imaging (DTI or DT-MR) is an imaging modality that facilitates measurement of the diffusion of water molecules in tissues. DTI has turned out to be especially useful in Neuroimaging because the inherent microstructure and connectivity networks in the brain can be estimated from such data [1]. The primary motivation is to investigate how specific components (i.e., structures) of the brain network topology respond to disease and treatment [2], and how these are affected as a result of external factors such as trauma. An important challenge here is to reliably extract (i.e., segment) specific structures of interest from DT-MR image volumes, so that these regions can then be analyzed to evaluate variations between clinically disparate groups. This paper focuses on efficient algorithms for this application, that is, 3-D image segmentation with side constraints that preserve fidelity of the extracted foreground with a given epitome of the brain region of interest. DTI data are represented as a 3 × 3 positive semidefinite tensor at each image voxel.

[Footnote: Supported by AG034315 (Singh), MH62015 (Alexander), UW ICTR (1UL1RR025011), and UW ADRC (AG033514). Hinrichs and Adluru are supported by UW-CIBM funding (via NLM 5T15LM007359). Thanks to Richie Davidson for assistance with the data, and Anne Bartosic and Chad Ennis for ground truth indications. The authors thank Lopamudra Mukherjee, Moo K. Chung, and Chuck Dyer for discussions and suggestions.]
These images provide information about connection pathways in the brain, and neuroscientists focus on the analysis of white-matter regions (these are known to encompass the "brain axonal networks"). In general, standard segmentation methods yield reasonable results in separating white matter (WM) from gray matter (GM), see [3]. While some of these algorithms make use of the tensor field directly [4], others utilize "maps" of certain scalar-valued anisotropy measures calculated from the tensors to partition WM/GM regions [5], see Fig. 1. But different pathways play different functional roles; hence it is more meaningful to evaluate group differences in a population at the level of specific white matter structures (e.g., corpus callosum, fornix, cingulum bundle). Part of the reason is that even significant volume differences in small structures may be overwhelmed in a pairwise t-test using volume measures of the entire white matter (obtained via WM/GM segmentation [6]). To analyze variations in specific regions, we require segmentation of such structures as a first step.

Unsupervised segmentation of specific regions of interest from DTI is difficult. Even interactive segmentation (based on gray-level fractional anisotropy maps) leads to unsatisfactory results unless guided by a neuroanatomical expert; that is, specialized knowledge of the global appearance of the structure is essential in this process. Further, this is tedious for large datasets. One alternative is to use a set of already-segmented images to facilitate processing of new data. Fortunately, since many studies use hand-indicated regions for group analysis [7], such data is readily available. However, directly applying off-the-shelf toolboxes to learn a classifier (from such segmented images) does not work well. Part of the reason is that the local spatial context at each tensor voxel, while useful, is not sufficiently discriminative. In fact, the likelihood of a voxel being assigned to the foreground (structure of interest) depends on whether the set of all foreground voxels (in its entirety) matches an "appearance model" of the structure, in addition to being perceptually homogeneous. One strategy to model the first requirement is to extract features, generate a codebook dictionary of feature descriptors, and ask that the distribution over the codebook (for foreground voxels) be consistent with the distribution induced by the expert-segmented foreground (on the same codebook). Putting this together with the homogeneity requirement defines the problem: segment a given DTI image (using MRFs, normalized cuts), while ensuring that the extracted foreground matches a known appearance model (over a bag of codebook features). The goal is related to recent work on simultaneous segmentation of two images called Cosegmentation [8, 9, 10, 11]. In the following sections, we formalize the problem and then present efficient segmentation methods.

The key contributions of this paper are: (i) We propose a new algorithm for epitome-based graph-cuts segmentation, one which permits introducing a bias to favor solutions that match a given epitome for regions of interest. (ii) We present an application to segmentation of specific structures in Diffusion Tensor images of the human brain and provide experimental evidence that many structures of interest in Neuroscience can be extracted reliably from large 3-D DTI images. (iii) Our analysis provides a guarantee of a constant-factor approximation ratio of 4; for a deterministic round-up strategy to obtain integral solutions, this approximation is provably tight.

2 Preliminaries

We provide a short overview of how image segmentation is expressed as finding the maximum likelihood solution to a Conditional or Markov Random Field function. Later, we extend the model to include an additional bias (or regularizer) so that configurations consistent with an epitome of a structure of interest turn out to be more likely than other, possibly lower energy, solutions.

Figure 1: Specific white matter structures such as the Corpus Callosum, Interior Capsules, and Cingulum Bundle are shown in 3D (left), within the entire white matter (center), and overlaid on a Fractional Anisotropy (FA) image slice (right). Our objective is to segment such structures from DTI images. Note that FA is a scalar anisotropy measure often used directly for WM/GM segmentation, since anisotropy is higher in white matter.
2 Preliminaries We provide a short overview of how image segmentation is expressed as finding the maximum likelihood solution to a Conditional or Markov Random Field function. Later, we extend the model to include an additional bias (or regularizer) so that the configurations that are consistent with an epitome of a structure of interest turn out to be more likely (than other possibly lower energy solutions). Figure 1: Specific white matter structures such as Corpus Callosum, Interior Capsules, and Cingulum Bundle are shown in 3D (left), within the entire white matter (center), and overlaid on a Fractional Anisotropy (FA) image slice (right). Our objective is to segment such structures from DTI images. Note that FA is a scalar anisotropy measure often used directly for WM/GM segmentation, since anisotropy is higher in white matter. 2 2.1 Markov Random Fields (MRF) Markov Random Field based image segmentation approaches are quite popular in computer vision [12, 13] and neuroimaging [14]. A random field is assumed over the image lattice consisting of discrete random variables, x = {x1 , ? ? ? , xn }. Each xj ? x, j ? {1, ? ? ? , n} takes a value from a finite label set, L = {L1 , ? ? ? , Lm }. The set Nj = {i|j ? i} lists the neighbors of xj on the adjacency lattice, denoted as (j ? i). A configuration of the MRF is an assignment of each xj to a label in L. Labels represent distinct image segments; each configuration gives a segmentation, and the desired segmentation is the least energy MRF configuration. The energy is expressed as a sum of (1) individual data log-likelihood terms (cost of assigning xj to Lk ? L) and (2) pairwise smoothness prior (favor voxels with similar appearance to be assigned to the same label) [12, 15, 16]: min x,z subject to n X X wjk xjk + Lk ?L j=1 X cij zij (1) (i?j) |xik ? xjk | ? zij ?k ? {1, ? ? ? , m}, ?(i ? j) ? N where i, j ? {1, ? ? ? , n}, x is binary of size n ? m, z is binary of size |N |, (2) (3) where wjk is a unary term encoding the probability of j being assigned to Lk ? L, and cij is the pairwise smoothness prior (e.g., Generalized Potts model). The variable zij = 1 indicates that voxels i and j are assigned to different labels and x provides the assignment of voxel to labels (i.e., segments or regions). The problem is NP-hard but good approximation algorithms (including combinatorial methods) are known [16, 15, 17, 12]. Special cases (e.g., when c is convex) are known to be poly-time solvable [15]. Next, we discuss an interesting extension of MRF segmentation, namely Cosegmentation, which deals with the simultaneous segmentation of multiple images. 2.2 From Cosegmentation toward Epitome-based MRFs Cosegmentation uses the observation that while global histograms of images of the same object (in different backgrounds) may differ, the histogram(s) of the respective foreground regions in the image pair (based on certain invariant features) remain relatively stable. Therefore, one may perform a concurrent segmentation of the images with a global constraint that enforces consistency between histograms of only the foreground voxels. We first construct a codebook of features F (e.g., using RGB intensities) for images I (1) and I (2) ; the histograms on this dictionary are: (1) (1) (2) (2) H(1) = {H1 , ? ? ? , H? } and H(2) = {H1 , ? ? ? , H? } (b indexes the histogram bins), (u) such that Hb (j) = 1 if voxel j ? I (u) is most similar to codeword Fb , where u ? {1, 2}. 
If x(1) (1) and x(2) denote the segmentation solutions, and xj = 1 assigns voxel j of I (1) to the foreground, a measure of consistency between the foreground regions (after segmentation) is given by: ? X ? ? (1) (2) ? hHb , x(1) i, hHb , x(2) i . b=1 (4) (u) Pn (u) (u) where ?(?, ?) is a suitable similarity (or distance) function and hHb , x(u) i = j=1 Hb (j)xj , a count of the number of voxels in I (u) (from Fb ) assigned to the foreground for u ? {1, 2}. Using (4) to regularize the segmentation objective (1) biases the model to favor solutions where the foregrounds match (w.r.t. the codebook F), leading to more consistent segmentations. The form of ?(?, ?) above has a significant impact on the hardness of the problem, and different ideas have been explored [8, 9, 10]. For example, the approach in [8] uses the `1 norm to measure (and penalize) the variation, and requires a Trust Region based method for optimization. The sum of squared differences (SSD) function in [9] leads to partially optimal (half integral) solutions but requires solving a large linear program ? infeasible for the image sizes we consider (which are orders of magnitude larger). Recently, [10] substituted ?(?, ?) with a so-called reward on histogram similarity. This does lead to a polynomial time solvable model, but requires the similarity function to be quite discriminative (otherwise offering a reward might be counter-productive in this setting). 3 Optimization Model We start by using the sum of squared differences (SSD) as in [9] to bias the objective function and incorporate epitome awareness within the MRF energy in (1). However, unlike [9], where one 3 seeks a segmentation of both images, here we are provided the second histogram ? the epitome (representation) of the specific region of interest. Clearly, this significantly simplifies the resultant Linear Program. Unfortunately, it remains computationally intractable for high resolution 3-D image volumes (2562 ? 128) we consider here (the images are much larger than what is solvable by state of the art LP software, as in [9]). We propose a solution based on a combinatorial method, using ideas from some recent papers on Quadratic Pseudoboolean functions and their applications [18, 19]. This allows us to apply our technique on large scale image volumes, and obtain accurate results quite efficiently. Further, our analysis shows that we can obtain good constant-factor approximations (these are tight under mild conditions). We discuss our formulation next. We first express the objective in (1) with an additional regularization term to penalize histogram dissimilarity using the sum of squared differences. This gives the following simple expression, min x,z X (1) cij zij + i?j n X (1) wj0 (1 ? xj ) + j=1 n X (1) wj1 xj + ? j=1 ? X (1) (hHb , x(1) i ? b=1 H?b |{z} (2) hHb )2 ,x(2) i ? Since the epitome (histogram) is provided, the second argument of ?(?, ?) in (4) is replaced with H, and x(1) represents the solution vector for image I (1) . In addition, the term wj0 (and wj1 ) denote the unary cost of assigning voxel j to the background (and foreground), and ? is a user-specified tunable parameter to control the influence of the histogram variation. This yields min x,z X cij zij + wj0 (1 ? xj ) + j=1 i?j subject to n X |xi ? xj | ? zij n X wj1 xj + ? j=1 ? X 1 0 ?b + @hHb , xi2 ? 2hHb , xiH b=1 ? b2 A H |{z} constant ?(i ? j) where i, j ? {1, ? ? ? , n}, and x, z is binary, (5) The last term in (5) is constant. So, the model reduces to min x,z s.t. 
X i?j cij zij + n X j=1 |xi ? xj | ? zij wj0 (1 ? xj ) + n X wj1 xj + ? j=1 ? n X n X X b=1 Hb (j)Hb (l)xj xl ? 2 j=1 l=1 ?(i ? j) where i, j ? {1, ? ? ? , n}, and n X ! ?b Hb (j)xj H j=1 x, z is binary, (6) Observe that form, ?(x1 , ? ? ? , xn ) = P Q (6) can be expressed as a special case of the general n S?U ?S j?S xj where U = {1, ? ? ? , n}, x = (x1 , ? ? ? , xn ) ? B is a binary vector, S is a subset of U , and ?S denotes the coefficient of S. Such a function ? : Bn 7? R is called a pseudoBoolean function [18]. If the cardinality of S is no more than two, the corresponding form is ?(x1 , x2 , ? ? ? , xn ) = X j ?j xj + X ?ij xi xj (i,j) These functions are called Quadratic Pseudo-Boolean functions (QPB). In general if the objective permits a representation as a QPB, an upper (or lower) bound can be derived using roof (or floor) duality [18], recently utilized in several papers [19, 20, 21]. Notice that the function in (6) is a QPB because it has at most two variables in each term in the expansion. An advantage of the model derived above is that (pending some additional adjustments) we will be able to leverage an extensive existing combinatorial machinery to solve the problem. We discuss these issues in more detail next. 4 Reparameterization and Graph Construction Now we discuss a graph construction to optimize the above energy function by computing a maximum flow/minimum cut. We represent each variable as a pair of literals, xj and x ?j , which corresponds to a pair of nodes in a graph G. Edges are added to G based on various terms in the corresponding QPB. The min-cut computed on G will determine the assignments of variables to 1 (or 0), i.e., foreground/background assignment. Depending on how the nodes for a pair of literals are partitioned, we either get ?persistent? integral solutions (same as in optimal) and/or obtain variables assigned 21 (half integral) values and need additional rounding to obtain a {0, 1} solution. We will first reparameterize the coefficients in our objective as a vector denoted by ?. More specifically, we express the energy by collecting the unary and pairwise costs in (6) as the coefficients of the linear and quadratic variables. For a voxel j, we denote the unary coefficient as ?j and for a pair of voxels (i, j) we give their corresponding coefficients as ?ij . For presentation, we show 4 Voxel pairs (i, j) (vi ? vj ), (? vj ? v?i ) (vj ? vi ), (? vi ? v?j ) (? vj ? vi ), (? vi ? vj ) i ? j, i 6? =j 1 c 2 ij 1 2 cij 0 i 6? j, i ? =j 0 0 1 2? i ? j, i ? =j 1 c 2 ij 1 2 cij 1 2? Table 1: Illustration of edge weights introduced in the graph for voxel pairs. spatial adjacency as i ? j, and if i and j share a bin in the histogram we denote it as i ? = j, i.e., ?b : Hb (i) = Hb (j) = 1. The definition of the pairwise costs will include the following scenarios: ?ij = 8 > < > : cij ? cij ? if i ? j if i 6? j if i ? j if i ? j and and and and i 6? =j i? =j i? =j i? =j and and and and (i, j) assigned to different labels (i, j) assigned to foreground (i, j) assigned to different labels (i, j) assigned to foreground (7) The above cases enumerate three possible relationships between a pair of voxels (i, j): (i) (i, j) are spatial neighbors but not bin neighbors; (ii) (i, j) are bin neighbors, but not spatial neighbors; (iii) (i, j) are bin neighbors and spatial neighbors. In addition, the cost is also a function of label assignments to (i, j). Note that we assume i 6= j above since if i = j, we can absorb those costs in the unary terms (because xi ? xi = xi ). 
We define the unary costs for each voxel j next. ? ?j = wj0 if j is assigned to background ? b if j is assigned to foreground and ?b : Hb (i) = 1 wj1 + ? ? 2?H (8) With the reparameterization given as ? = s 1w [?j ?ij ]T done, we follow the recipe in 1 (w + ? ? 2?H ? b) 2 j0 [18, 22] to construct a graph (briefly sum2 j1 marized below). For each voxel j ? I, we introduce two nodes, vj and v?j . Hence, the size of the graph is 2|I|. We also have 1? two special nodes s and t which denote the 2 source and sink respectively. We connect I? I each node to the source and/or the sink 1? based on the unary costs, assuming that the 2 source (and sink) partitions correspond to foreground (and background). The source 1c is connected to the node vj with weight, 1 1 1 cji 2 cij 2 ji 2 cij 1 2 ? (w + ? ? 2? H ), and to node v ? with j1 b j 2 1w weight 12 wj0 . Nodes vj and v?j are in turn t 1 (wj1 + ? ? 2?H? b) 2 j0 2 1 connected to the sink with costs 2 wj0 and 1 ? 2 (wj1 + ? ? 2?Hb ) respectively. These bin neighbors spatial neighbors any voxel j edges, if saturated in a max-flow, count towards the node?s unary cost. Edges be- Figure 2: A graph to optimize (6). Nodes in the left box tween node pairs (except source and sink) represents vj ; nodes in the right box represent v?j . Colors give pairwise terms of the energy. These indicate spatial neighbors (orange) or bin neighbors (green). edge weights (see Table1) quantify all possible relationships of pairwise voxels and label assignments (Fig. 2). A maximum flow/minimum cut procedure on this graph gives a solution to our problem. After the cut, each node (for a voxel) is connected either to the source set or to the sink set. Using this membership, we can obtain a final solution (i.e., labeling) as follows. 8 < 0 1 xj = : 1 2 if vj ? s, v?j ? t if vj ? t, v?j ? s otherwise (9) A property of the solution obtained by (9) is that the variables assigned {0, 1} values are ?persistent?, i.e., they are the same in the optimal integral solution to (6). This means that the solution from the algorithm above is partially optimal [18, 20]. We now only need to find an assignment for the 12 variables (to 0 or 1) by rounding. The rounding strategy and analysis is presented next. 5 Rounding and Approximation analysis In general, any reasonable heuristic can be used to round 12 -valued variables to 0 or 1 (e.g., we can solve for and obtain a segmentation for only the 12 -valued variables without the additional bias). Our 5 experiments later make use of such a heuristic. The approximation analysis below, however, is based on a more conservative scheme of rounding all 12 -valued variables up to 1. We only summarize our main results here, the longer version of the paper includes details. A 2-approximation for the objective function (without the epitome bias) is known [16, 12]. The rounding above gives a constant factor approximation. Theorem 1 The rounding strategy described above gives a feasible solution to Problem (6). This solution is a factor 4 approximation to (6). Further, the approximation ratio is tight for this rounding. 6 Experimental Results Overview. We now empirically evaluate our algorithm for extracting specific structures of interest from DTI data, focusing on (1) Corpus Callosum (CC), and (2) Interior Capsule (IC) as representative examples. Our experiments were designed to answer the following main questions: (i) Can the model reliably and accurately identify the structures of interest? 
Note that general-purpose white matter segmentation methods do not extract specific regions (which is often obtained via intensive interactive methods instead). Solutions from our algorithm, if satisfactory, can be used directly for analysis or as a warm-start for user-guided segmentations for additional refinement. (ii) Does segmentation with a bias for fidelity with epitomes offer advantages over training a classifier on the same features? Clearly, the latter scheme will work nicely if the similarity between foreground/background voxels is sufficiently discriminative. Our experiments provide evidence that epitomes indeed offer advantages. (iii) Finally, we evaluate the advantages of our method in terms of relative effort expended by a user performing interactive extraction of CC and IC from 3-D volumes. Data and Setup. We acquired 25 Diffusion Tensor brain images in 12 non-collinear diffusion encoding directions (and one b = 0 reference image) with diffusion weighting factor of b = 1000s/mm2 . Standard image processing included correcting for eddy current related distortion, distortion from field inhomogeneities (using field maps), and head motion. From this data, the tensor elements were estimated using standard toolboxes (Camino [23]). The images were then hand-segmented (slice by slice) by experts to serve as the gold standard segmentation. Within a leave one out cross validation scheme, we split our set into training (24 images) and test set (hold out image). Epitomes were constructed using training data (by averaging tensor volumes and generating feature codeword dictionaries), and then specific structures in the hold out image were segmented using our model. Codewords used for the epitome also served to train a SVM classifier (on training data), which was then used to label voxels as foreground (part of structure of interest) or background, in the hold-out image. We present the mean of segmentation accuracy over 25 realizations. WM/GM DTI segmentation. To briefly elaborate on (i) above, we note that most existing DTI segmentation algorithms in the literature [24] focus on segmenting the entire white-matter (WM) from gray-matter (GM) where as the focus here is to extract specific structure within the WM path- Figure 3: WM/GM segmentation (without epitomes) from stanways, to facilitate the type of analysis dard toolkits, overlaid on FA maps (axial, sagittal views shown). being pursued in neuroscience studies [25, 2]. Fig. 3 shows results of a DTI image WM segmentation. Such methods segment WM well but are not designed to identify different components within the WM. Certain recent works [26] have reported success in identifying structures such as the cingulum bundle if a good population specific atlas is available (here, one initializes the segmentation by a sophisticated registration procedure). Dictionary Generation. A suitable codebook of features (i.e., F from ?2.2) is essential to modulate the segmentation (with an uninformative histogram, the process degenerates to a ordinary segmentation without epitomes). Results from our preliminary experiments suggested that the codeword generation must be informed by the properties/characteristics of Diffusion Tensor images. While general purpose feature extractors or interest-point detectors from Vision cannot be directly applied to tensor data, our simple scheme below is derived from these ideas. 
Briefly, by first setting up a neighborhood region around each voxel, we evaluate the local orientation context and shape in6 formation from the principal eigen vectors and eigen values of tensors at each neighboring voxel. Similar to Histogram of Oriented Gradients or SIFT, each neighboring voxel casts a vote for the primary eigen vector orientation (weighted by its eigen value), which encodes the distribution of tensor orientations in a local neighborhood around the voxel, as a feature vector. These feature vectors are then clustered, and each voxel is ?assigned? to its closest codeword/feature to give H(u) . Certain adjustments are needed for structurally sparse regions close to periphery of the brain surface, where we use all primary eigen vectors in a (larger) neighborhood window. This dictionary generation is not rotationally invariant since the orientation of the eigen-vectors are used. Our literature review suggests that there is no ?accepted? strategy for feature extraction from tensor-valued images. While the problem is interesting, the procedure here yields reasonable results for our purpose. We acknowledge that improvements may be possible using more sophisticated approaches. Implementation Details. Our implementation in C++ was interfaced with a QPB solver [22, 18]. We used a distance measure proposed in DTI-TK [23] which is popular in the neuroimaging literature, to obtain a similarity measure between tensors. The unary terms for the MRF component were calculated as the least DTI-TK metric distance between the voxel and a set of labels (generated by sampling from foreground in the training data). Pairwise smoothness terms were calculated using a spatial neighborhood of 18 neighbors. The parameter ? was set to 10 for all runs. 6.1 Results: User guided interactive segmentation, Segmentation with Epitomes and SVMs User study for interactive segmentation. To assess the amount of effort expended in obtaining a good segmentation of the regions of interest in an interactive manner, we set up a user study with two users who were familiar with (but not experts in) neuroanatomy. The users were presented with the ground truth solution for each image. The user provided ?scribbles? denoting foreground/background regions, which were incorporated into the segmentation via must-link/cannotlink constraints. Ignoring the time required for segmentation, typically 20-40 seeds were needed for each 2-D slice/image to obtain results close to ground-truth segmentations, which required ? 60s of user participation per 3-4 slices. Representative results are presented in Figs. 4?5 (column 5). Results from SVM and our model. For comparison, we trained a SVM classifier on the same set of voxel-codewords used for the epitomes. For training, feature vectors for foreground/background voxels from the training images were used, and the learnt function was used to classify voxels in the hold-out image. Representative results are presented in Figs. 4?5, overlaid on 2-D slices of Fractional Anisotropy. We see good consistency between our solutions and the ground truth in Figs. 4?5 where as the SVM results seem to oversegment, undersegment or pick up erroneous regions with similar contextual appearance to some voxels in the epitome. It is true that such a classification experiment with better (more discriminative) features will likely perform better; however, it is not clear how to reliably extract good quality features from tensor valued images. 
The results also suggest that our model exploits the epitome of such features rather well within a segmentation criterion. Quantitative Summary. For quantitative evaluations, we computed the Dice Similarity coefficient 2(A?B) between the segmentation solutions A and the expert segmentation B, given as |A|+|B| . On CC and IC, the similarity coefficient of our solutions were 0.62 ? 0.04 and 0.57 ? 0.05 respectively. The corresponding values for the SVM segmentation were 0.28 ? 0.06 and 0.15 ? 0.02 respectively. Hence, the null hypothesis using a two sample t-test can be rejected at ? = 0.01 (significance level). The running time of our algorithm was comparable to the running times of SVM using Shogun (a subset of voxels were used for training). It took ? 2 mins for our algorithm to solve the network flow on the graph, and < 4 mins to read in the images and construct the graph. While the segmentation results from the user-guided interactive segmentation are marginally better than ours, the user study above indicates that a significant level of interaction is required, which is already difficult for large 3-D volumes and becomes impractical for neuroimaging studies with tens of image volumes. 7 Discussion and Conclusions We present a new combinatorial algorithm for segmenting specific structures from DTI images. Our goal is to segment the structure while maintaining consistency with an epitome of the structure, generated from expert segmented images (note that this is different from top-down segmentation approaches [27], and algorithms which use a parametric prior [28, 11]). We see that direct application of max-margin methods does not yield satisfactory results, and inclusion of a segmentation-specific objective function seems essential. Our derived model can be optimized using a network flow pro7 Figure 4: A segmentation of the Corpus Callosum overlaid on FA maps. Rows refer to axial and sagittal views. Columns: (1) Tensors. (2) Ground truth. (3) Our solutions. (4) SVM results. (5) User-guided segmentation. Figure 5: A segmentation of the Interior Capsules overlaid on FA maps. Rows correspond to axial views. Columns: (1) Tensors. (2) Ground truth. (3) Our Solutions. (4) SVM results. (5) User-guided segmentation. cedure. We also prove a 4 factor approximation ratio, which is tight for the proposed rounding mechanism. We present experimental evaluations on a number of large scale image volumes which shows that the approach works well, and is also computationally efficient (2-3 mins). Empirical improvements seem possible by designing better methods of feature extraction from tensor-valued images. The model may serve to incorporate epitomes for general segmentation problems on other images as well. In summary, our approach shows that many structures of interest in neuroimaging can be accurately extracted from DTI data. References [1] J. Burns, D. Job, M. E. Bastin, et al. Structural disconnectivity in schizophrenia: a diffusion tensor magnetic resonance imaging study. The British J. of Psychiatry, 182(5):439?443, 2003. 1 8 [2] A. Pfefferbaum and E. Sullivan. Microstructural but not macrostructural disruption of white matter in women with chronic alcoholism. Neuroimage, 15(3):708?718, 2002. 1, 6 [3] T. Liu, H. Li, K. Wong, et al. Brain tissue segmentation based on DTI data. Neuroimage, 38:114?123, 2007. 2 [4] Z. Wang and B. Vemuri. DTI segmentation using an information theoretic tensor dissimilarity measure. Trans. on Med. Imaging, 24:1267?1277, 2005. 2 [5] P. A. Yushkevich, H. Zhang, T. J. 
Simon, and J. C. Gee. Structure-specific statistical mapping of white matter tracts using the continuous medial representation. In Proc. of MMBIA, 2007.
[6] N. Lawes, T. Barrick, V. Murugam, et al. Atlas based segmentation of white matter tracts of the human brain using diffusion tensor tractography and comparison with classical dissection. Neuroimage, 39:62-79, 2008.
[7] C. B. Goodlett, T. P. Fletcher, J. H. Gilmore, and G. Gerig. Group analysis of DTI fiber tract statistics with application to neurodevelopment. Neuroimage, 45(1):S133-S142, 2009.
[8] C. Rother, T. Minka, A. Blake, and V. Kolmogorov. Cosegmentation of image pairs by histogram matching: Incorporating a global constraint into MRFs. In Comp. Vision and Pattern Recog., 2006.
[9] L. Mukherjee, V. Singh, and C. Dyer. Half-integrality based algorithms for cosegmentation of images. In Comp. Vision and Pattern Recog., 2009.
[10] D. Hochbaum and V. Singh. An efficient algorithm for co-segmentation. In Intl. Conf. on Comp. Vis., 2009.
[11] D. Batra, A. Kowdle, D. Parikh, et al. iCoseg: Interactive co-segmentation with intelligent scribble guidance. In Comp. Vision and Pattern Recog., 2010.
[12] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Trans. on Pattern Anal. and Machine Intel., 23(11):1222-1239, 2001.
[13] V. Kolmogorov, Y. Boykov, and C. Rother. Applications of parametric maxflow in computer vision. In Intl. Conf. on Comp. Vision, 2007.
[14] Y. T. Weldeselassie and G. Hamarneh. DT-MRI segmentation using graph cuts. In Medical Imaging: Image Processing, volume 6512 of Proc. SPIE, 2007.
[15] D. Hochbaum. An efficient algorithm for image segmentation, Markov random fields and related problems. J. of the ACM, 48(4):686-701, 2001.
[16] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric partitioning and Markov random fields. J. of the ACM, 49(5):616-639, 2002.
[17] H. Ishikawa. Exact optimization for Markov random fields with convex priors. Trans. on Pattern Anal. and Machine Intel., 25(10):1333-1336, 2003.
[18] E. Boros and P. Hammer. Pseudo-Boolean optimization. Disc. Appl. Math., 123:155-225, 2002.
[19] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof duality. In Comp. Vision and Pattern Recog., 2007.
[20] P. Kohli, A. Shekhovtsov, C. Rother, V. Kolmogorov, et al. On partial optimality in multi-label MRFs. In Intl. Conf. on Machine Learning, 2008.
[21] A. Raj, G. Singh, and R. Zabih. MRFs for MRIs: Bayesian reconstruction of MR images via graph cuts. In Comp. Vision and Pattern Recog., 2006.
[22] V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts: a review. Trans. on Pattern Anal. and Machine Intel., 29(7):1274, 2007.
[23] H. Zhang, P. A. Yushkevich, D. C. Alexander, and J. C. Gee. Deformable registration of diffusion tensor MR images with explicit orientation optimization. Medical Image Analysis, 10:764-785, 2006.
[24] M. Rousson, C. Lenglet, and R. Deriche. Level set and region based surface propagation for diffusion tensor MRI segmentation. In Proc. of CVAMIA-MMBIA, volume 3117 of LNCS, pages 123-134, 2004.
[25] S. M. Smith, M. Jenkinson, H. Johansen-Berg, et al. Tract-based spatial statistics: Voxelwise analysis of multi-subject diffusion data. Neuroimage, 31:1487-1505, 2006.
[26] S. Awate, H. Zhang, and J. C. Gee.
A fuzzy, nonparametric segmentation framework for DTI and MRI analysis with applications to DTI tract extraction. Trans. on Med. Imaging, 26(11):1525-1536, 2007.
[27] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In Comp. Vision and Pattern Recognition Workshop, 2004.
[28] C. Jingyu, Y. Qiong, W. Fang, et al. Transductive object cutout. In Comp. Vision and Pattern Recog., 2008.
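As a companion to the Quantitative Summary above, here is a minimal sketch of the Dice similarity computation used to compare a predicted segmentation against the expert one; it assumes both segmentations are given as binary numpy masks of the same shape.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```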
Copula Bayesian Networks

Gal Elidan
Department of Statistics, Hebrew University
Jerusalem, 91905, Israel
[email protected]

Abstract

We present the Copula Bayesian Network model for representing multivariate continuous distributions, while taking advantage of the relative ease of estimating univariate distributions. Using a novel copula-based reparameterization of a conditional density, joined with a graph that encodes independencies, our model offers great flexibility in modeling high-dimensional densities, while maintaining control over the form of the univariate marginals. We demonstrate the advantage of our framework for generalization over standard Bayesian networks as well as tree-structured copula models for varied real-life domains that are of substantially higher dimension than those typically considered in the copula literature.

1 Introduction

Multivariate real-valued distributions are of paramount importance in a variety of fields ranging from computational biology and neuroscience to economics to climatology. Choosing and estimating a useful form for the marginal distribution of each variable in the domain is often a straightforward task. In contrast, aside from the normal representation, few univariate distributions have a convenient multivariate generalization. Indeed, modeling and estimation of flexible (skewed, multi-modal, heavy-tailed) high-dimensional distributions is still a formidable challenge.

Copulas [23] offer a general framework for constructing multivariate distributions using any given (or estimated) univariate marginals and a copula function C that links these marginals. The importance of copulas is rooted in Sklar's theorem [29], which states that any multivariate distribution can be represented as a copula function of its marginals. The constructive converse is important from a modeling perspective, as it allows us to separate the choice of the marginals and that of the dependence structure, which is expressed in C. We can, for example, robustly estimate marginals using a non-parametric approach, and then use only a few parameters to capture the dependence structure. This can result in a model that is easier to estimate and less prone to over-fitting than a fully nonparametric one, while at the same time avoiding the limitations of a fully parameterized distribution. In practice, copula constructions often lead to significant improvement in density estimation. Accordingly, there has been a dramatic growth of academic and practical interest in copulas in recent years, with applications ranging from mainstream financial risk assessment and actuarial analysis (e.g., Embrechts et al. [7]) to off-shore engineering (e.g., Accioly and Chiyoshi [2]).

Despite the generality of the framework, constructing high-dimensional copulas is difficult, and much of the research involves only the bivariate case. Several works have attempted to overcome this difficulty by suggesting innovative ways in which bivariate copulas can be combined to form workable copulas of higher dimensions. These attempts, however, are either limited to hierarchical [26] or mixture of trees [14] compositions, or rely on a recursive construction of conditional bivariate copulas [1, 3, 17] that is somewhat elaborate for high dimensions. In practice, applications are almost always limited to a modest (< 10) number of variables (see Section 6 for further discussion). Bayesian networks (BNs) [25] offer a markedly different approach for representing multivariate distributions.
In this widely used framework, a graph structure encodes independencies which imply a decomposition of the joint density into local terms (the density of each variable conditioned on its parents). This decomposition in turn facilitates efficient probabilistic computation and estimation, making the framework amenable to high-dimensional domains. However, the expressiveness of these models is hampered by practical considerations that almost always lead to the reliance on simple parametric forms. Specifically, non-parametric variants of BNs (e.g., [9, 27]) typically involve elaborate training setups with a running time that grows unfavorably with the number of samples and local graph connectivity. Furthermore, aside from the case of the normal distribution, the form of the univariate marginal is neither under control nor is it typically known.

Our goal is to construct flexible multivariate continuous distributions that maintain desired marginals while accommodating tens and hundreds of variables, or more. We present Copula Bayesian Networks (CBNs), an elegant marriage between the copula and the Bayesian network frameworks. (A preliminary draft of this paper appeared as a technical report; a companion paper [6] addresses the question of performing approximate inference in Copula Bayesian Networks.) As in BNs, we make use of a graph to encode independencies that are assumed to hold. Differently, we rely on local copula functions and an explicit globally shared parameterization of the univariate densities. This allows us to retain the flexibility of BNs, while offering control over the form of the marginals, resulting in substantially improved multivariate densities (see Section 7 for a discussion of the related works of Kirshner [14] and Liu et al. [20]).

At the heart of our approach is a novel reparameterization of a conditional density using a copula quotient. With this construction, we prove a parallel to the BN factorization theorem: a decomposition of the joint density according to the structure of the graph implies a decomposition of the joint copula. Conversely, a product of local copula-based quotient terms is a valid multivariate copula. This result provides us with a flexible modeling tool where joint densities are constructed via a composition of local copulas and marginal densities. Importantly, the construction also allows us to use standard BN machinery for estimation and structure learning. Thus, our model opens the door for flexible explorative learning of high-dimensional models that retain desired marginal characteristics.

We learn the structure and parameters of a CBN for three varied real-life domains that are of a significantly higher dimension than typically reported in the copula literature. Using standard copula functions, we show that in all cases our approach leads to consistent and significant improvement in generalization when compared to standard BN models as well as a tree-structured copula model.

2 Copulas

Let X = {X_1, ..., X_N} be a finite set of real-valued random variables and let F_X(x) ≡ P(X_1 ≤ x_1, ..., X_N ≤ x_N) be a (cumulative) distribution function over X, with lower-case letters denoting assignment to variables. By slight abuse of notation, we use F(x_i) ≡ F(X_i ≤ x_i, X_{X\X_i} = ∞) and f(x_i) ≡ f_{X_i}(x_i), and similarly for sets of variables f(y) ≡ f_Y(y). A copula function [23, 29] links marginal distributions to form a multivariate one. Formally,

Definition 2.1: Let U_1, ..., U_N be real random variables marginally uniformly distributed on [0, 1]. A copula function C : [0, 1]^N → [0, 1] is a joint distribution function

C(u_1, ..., u_N) = P(U_1 ≤ u_1, ..., U_N ≤ u_N).
Copulas are important because of the following seminal result.

Theorem 2.2 [Sklar 1959]: Let F(x_1, ..., x_N) be any multivariate distribution over real-valued random variables. Then there exists a copula function such that F(x_1, ..., x_N) = C(F(x_1), ..., F(x_N)). Furthermore, if each F(x_i) is continuous, then C is unique.

The constructive converse, which is of central interest from a modeling perspective, is also true: since for any random variable the cumulative distribution F(x_i) is uniformly distributed on [0, 1], any copula function taking the marginal distributions {F(x_i)} as its arguments defines a valid joint distribution with marginals F(x_i). Thus, copulas are "distribution-generating" functions that allow us to separate the choice of the univariate marginals and that of the dependence structure expressed in the copula function C, often resulting in an effective real-valued construction. (Copulas can also be defined given non-continuous marginals and for ordinal random variables; these extensions are orthogonal to our work, and to maintain clarity we focus here on the continuous case.)

Figure 1: Samples from the 2-dimensional normal copula density using a correlation matrix with a unit diagonal and an off-diagonal coefficient of 0.25. (left) with zero-mean and unit-variance normal marginals; (right) with a mixture of two Gaussians marginals.

To derive the joint density f(x) = ∂^N F(x) / (∂x_1 ⋯ ∂x_N) from the copula construction, assuming F has N-order partial derivatives (true almost everywhere when F is continuous), and using the chain rule, we have

f(x) = [∂^N C(F(x_1), ..., F(x_N)) / (∂F(x_1) ⋯ ∂F(x_N))] ∏_i f(x_i) = c(F(x_1), ..., F(x_N)) ∏_i f(x_i),   (1)

where c(F(x_1), ..., F(x_N)) is called the copula density function. Eq. (1) will be of central use in this paper, as we will directly model joint densities.

Example 2.3: A simple copula widely explored in the financial community is the Gaussian copula, constructed directly by inverting Sklar's theorem [7]:

C({F(x_i)}) = Φ_Σ(Φ⁻¹(F(x_1)), ..., Φ⁻¹(F(x_N))),   (2)

where Φ is the standard normal distribution and Φ_Σ is the zero-mean normal distribution with correlation matrix Σ. To get a sense of the power of copulas, Figure 1 shows samples generated from this copula using two different families of univariate marginals. More generally, and without added computational difficulty, we can also mix and match marginals of different forms.

3 Copula Bayesian Networks (CBNs)

As in the copula framework, our goal is to model real-valued multivariate distributions while taking advantage of the relative ease of one-dimensional estimation. To cope with high-dimensional domains, as in BNs, we would also like to utilize independence assumptions encoded by a graph. To achieve this goal, we will construct multivariate copulas that are a composition of local copulas that follow the structure of the graph. We start with the building block of our construction.

3.1 Copula Parameterization of the Conditional Density

As in the BN framework, the building block of our model will be a local conditional density. We start with a parameterization of such a density using copulas:
Lemma 3.1: Let f(x | y), with y = {y_1, ..., y_K}, be a conditional density function and let f(x) be the marginal density of X. Then there exists a copula density function c(F(x), F(y_1), ..., F(y_K)) such that

f(x | y) = R_c(F(x), F(y_1), ..., F(y_K)) f(x),

where R_c is the ratio

R_c(F(x), F(y_1), ..., F(y_K)) ≡ c(F(x), F(y_1), ..., F(y_K)) / ∫ c(F(x), F(y_1), ..., F(y_K)) f(x) dx = c(F(x), F(y_1), ..., F(y_K)) / [∂^K C(1, F(y_1), ..., F(y_K)) / (∂F(y_1) ⋯ ∂F(y_K))],

and where R_c is defined to be 1 when y = ∅. The converse is also true: for any copula density function c, R_c(F(x), F(y_1), ..., F(y_K)) f(x) defines a valid conditional density function.

Before proving this result, it is important to understand why the derivative form of the denominator (right-most term) is more useful than the standard normalization integral ∫ c(F(x), F(y_1), ..., F(y_K)) f(x) dx. Recall that c() is itself an N-order derivative of the copula function, so computing our denominator is no more difficult than computing c(). Indeed, for the majority of existing copula functions, both have an explicit form. In contrast, the integral term depends both on the copula form and the univariate marginal, and is generally difficult to compute.

Proof: From the basic properties of cumulative distribution functions, we have that for any copula function C(1, F(y_1), ..., F(y_K)) = F(y_1, ..., y_K), and thus, using the derivative chain rule,

f(y) = ∂^K C(1, F(y_1), ..., F(y_K)) / (∂y_1 ⋯ ∂y_K) = [∂^K C(1, F(y_1), ..., F(y_K)) / (∂F(y_1) ⋯ ∂F(y_K))] ∏_k f(y_k).

From Eq. (1) we have that there exists a copula density for which f(x, y_1, ..., y_K) = c(F(x), F(y_1), ..., F(y_K)) f(x) ∏_k f(y_k). It follows that there exists a copula for which

f(x | y) = f(x, y_1, ..., y_K) / f(y) = c(F(x), F(y_1), ..., F(y_K)) f(x) ∏_k f(y_k) / ([∂^K C(1, F(y_1), ..., F(y_K)) / (∂F(y_1) ⋯ ∂F(y_K))] ∏_k f(y_k)) = c(F(x), F(y_1), ..., F(y_K)) f(x) / [∂^K C(1, F(y_1), ..., F(y_K)) / (∂F(y_1) ⋯ ∂F(y_K))] ≡ R_c(F(x), F(y_1), ..., F(y_K)) f(x).

As in Sklar's theorem and Eq. (1), the converse follows easily by reversing the arguments.

The implications of this result will underlie our construction: any copula density function c(x, y_1, ..., y_K), together with f(x), can be used to parameterize a conditional density f(x | y).

3.2 Decomposition of the Joint Copula

Let G be a directed acyclic graph whose nodes correspond to the random variables X, and let Pa_i = {Pa_{i1}, ..., Pa_{ik_i}} be the parents of X_i in G. G encodes the independence statements I(G) = {(X_i ⊥ NonDescendants_i | Pa_i)}, where NonDescendants_i are nodes that are non-descendants of X_i in G. We say that f_X(x) decomposes according to G if it can be written as a product of conditional densities f_X(x) = ∏_i f(X_i | Pa_i). It can be shown that if f decomposes according to G then I(G) hold in f_X(x). The converse is also true: if I(G) hold in f_X(x), then the density decomposes according to G (see [16], Theorems 3.1 and 3.2). These results form the basis for the BN model [25], where a joint density is constructed via a composition of local conditional densities. We now show that similar results hold for a multivariate copula. This in turn will provide the basis for our construction of the CBN model.

Theorem 3.2 (Decomposition): Let G be a directed acyclic graph over X, and let f_X(x) be parameterized via a joint copula density f_X(x) = c(F(x_1), ..., F(x_N)) ∏_i f(x_i), with f_X(x) strictly positive for all values of X. If f_X(x) decomposes according to G, then the copula density
c(F(x_1), ..., F(x_N)) also decomposes according to G:

c(F(x_1), ..., F(x_N)) = ∏_i R_{c_i}(F(x_i), {F(pa_{ik})}),

where c_i is a local copula that depends only on the value of X_i and its parents in G.

Proof: Using the positivity assumption, we can rearrange Eq. (1) to get c(F(x_1), ..., F(x_N)) = f(x) / ∏_i f(x_i). From Lemma 3.1 and the decomposition of f(x) we have

c(F(x_1), ..., F(x_N)) = f(x) / ∏_i f(x_i) = ∏_i f(x_i | pa_i) / ∏_i f(x_i) = ∏_i R_{c_i}(F(x_i), {F(pa_{ik})}) f(x_i) / ∏_i f(x_i) = ∏_i R_{c_i}(F(x_i), {F(pa_{ik})}).

The constructive converse that is of central interest here is also true:

Theorem 3.3 (Composition): Let G be a directed acyclic graph over X. In addition, let {c_i(F(x_i), F(pa_{i1}), ..., F(pa_{ik_i}))} be a set of strictly positive copula densities associated with the nodes of G that have at least one parent. If I(G) hold, then the function

g(F(x_1), ..., F(x_N)) = ∏_i R_{c_i}(F(x_i), {F(pa_{ik})})

is a valid copula density c(F(x_1), ..., F(x_N)) over X.
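To make the local building block above concrete, the following is a minimal sketch of a bivariate Gaussian copula density and the quotient R_c for a single-parent family; it assumes scipy is available, and the marginals and correlation value are purely illustrative. Note that for a single parent C(1, v) = v, so the denominator of R_c equals 1 and the quotient reduces to the copula density itself.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal, expon

def gaussian_copula_density(u1, u2, rho):
    """Bivariate Gaussian copula density c(u1, u2) with correlation rho."""
    z = norm.ppf([u1, u2])                  # inverse-CDF (quantile) transform
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_normal(mean=np.zeros(2), cov=cov).pdf(z)
    return joint / (norm.pdf(z[0]) * norm.pdf(z[1]))

def conditional_density(x, y, f_x, F_x, F_y, rho):
    """f(x | y) = R_c(F(x), F(y)) f(x); here R_c = c since K = 1."""
    return gaussian_copula_density(F_x(x), F_y(y), rho) * f_x(x)

# Illustrative use with (arbitrary) exponential marginals.
val = conditional_density(1.0, 0.5, expon.pdf, expon.cdf, expon.cdf, rho=0.25)
```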
Importantly, as in the case of BNs, our construction of a joint copula density that decomposes over the graph structure G also facilitates efficient parameter estimation and model selection (structure learning), as we briefly discuss in the next section. 4 Learning As in the case of BNs, the product form of our CBN facilitates relatively efficient estimation and model selection. The machinery is standard and only briefly described below. Parameter Estimation Given a complete dataset D of M instances where all of the variables X are observed in each instance, the log-likelihood of the data given a CBN model C is PM P PM P `(D : C) = m=1 i log f (xi [m]) + m=1 i log Ri (F(xi )[m], F(pai1 [m]), . . . , F(paiki [m])) While this objective appears to fully decompose according to the structure of G, each marginal distribution F(xi ) actually appears in several local copula terms (of Xi and its children in G). To facilitate efficient estimation, we adopt the common approach where the marginals are estimated first [13]. Given F(xi ), we can then estimate the parameters of each local copula independently of the others. We estimate the univariate densities using a standard normal kernel-based approach [24]. In this work we consider two of the simplest and most commonly used copula functions. For  Frank?s Q Archimedean copula C(u1 , . . . , uN ) = ? ?1 log 1 + i (e??F(xi ) ? 1)/(e?? ? 1)N ?1 , and for the Gaussian copula (see Section 2) with a uniform correlation parameter, we find the maximum 5 10?fold train log?probability / instance Wine Train Dow Jones Train ?16 200 ?4 ?18 180 ?5 ?20 ?6 ?22 ?7 ?24 ?8 ?26 ?9 ?28 ?10 ?30 160 140 120 100 80 Sigmoid BN Kernel?Gaussian CBN Kernel?UnifCorr CBN Kernel?Frank?s CBN Normal?UnifCorr CBN 60 ?11 ?0.5 0 0.5 1 1.5 2 2.5 3 3.5 4 ?32 4.5?0.5 40 0 Maximum number of parents 0.5 1 1.5 2 2.5 3 3.5 4 20 4.5 ?0.5 0 Maximum number of parents Wine Test 0.5 1 2 2.5 3 3.5 4 4.5 Crime Test ?18 ?4 1.5 Maximum number of parents Dow Jones Test ?3 10?fold test log?probability / instance Crime Train ?3 200 180 ?20 160 ?5 ?22 140 ?6 ?24 120 ?7 100 ?26 ?8 80 ?28 Sigmoid BN Kernel?Gaussian CBN Kernel?UnifCorr CBN Kernel?Frank?s CBN Normal?UnifCorr CBN ?9 60 ?30 ?10 ?11 ?0.5 40 0 0.5 1 1.5 2 2.5 3 3.5 4 ?32 4.5?0.5 Maximum number of parents 0 0.5 1 1.5 2 2.5 3 3.5 Maximum number of parents 4 20 4.5 ?0.5 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 Maximum number of parents Figure 2: Train and test set performance for the 12 variable Wine, 28 variable Dow Jones and 100 variables Crime datasets. Models compared: Sigmoid BN; CBN with a uniform correlation normal copula (single parameter); CBN with a full normal copula (0.5 ? d(d ? 1) parameters); CBN with Frank?s single parameter copula. Shown is the 10-fold average log-probability per instance (y-axis) vs. the maximal number of parents allowed in the network (x-axis). Error bars (slightly shifted for readability) show the 10 ? 90% range. The structure for all models was learned with the same search procedure using the BIC model selection score. likelihood parameters using a standard conjugate gradient algorithm. For the Gaussian copula with a full covariance matrix, a reasonably effective and substantially more efficient method is based on the relationship between the copula function and Kendall?s Tau dependence measure [19]. For lack of space, further details for both of these copulas are provided in the supplementary material. 
Model Selection Very briefly, to learn the structure of G, we use a standard score-based approach that starts with the empty network, and greedily advances via local modifications to the current structure (add/delete/reverse edge). The search is guided by the Bayesian information criterion [28] that bal? G) ? 1 log(M )|?G |, ances the likelihood of the model and its complexity score(G : D) = `(D : ?, 2 where ?? are the maximum-likelihood parameters, and |?G | is the number of free parameters associated with the graph structure G. During the search, we also use a TABU list and random restarts [10] to mitigate the problem of local maxima. See Koller and Friedman [16] for more details. 5 Experimental Evaluation We assess the effectiveness of our approach for density estimation by comparing CBNs and BNs learned from training data in terms of log-probability performance on test data. For BNs, we use a linear Gaussian conditional density and a non-linear Sigmoid one (see Koller and Friedman [16]). For CBNs, to demonstrate the flexibility of our framework, we consider the three local copula functions discussed in Section 4: fully parametrized Normal copula; the same copula with a single correlation parameter and unit diagonal (UnifCorr); Frank?s single parameter Archimedean copula. We use standard normal kernel density estimation for the univariate densities. The structure of both the BN and CBN models was learned using the same greedy structure search procedure described in Section 4. We consider three datasets of a markedly different nature and dimensionality: ? Wine Quality (UCI repository). 11 physiochemical properties and a sensory quality variable for the red Portuguese ?Vinho Verde? wine [4]. Included are measurements from 1599 tastings. 6 32 360 30 340 # edges in competitor # edges in competitor Figure 3: Comparison of the number of edges learned in the different random run for different models (y-axis) vs. the Sigmoid BN model (x-axis), when the maximal number of parents in the network was limited to 4. 28 26 24 22 20 18 Kernel?UnifCorr CBN Gaussian BN Normal?UnifCorr CBN 320 300 280 260 240 220 200 16 180 14 14 16 18 20 22 24 26 28 # edges in Sigmoid BN Wine dataset 30 32 180 200 220 240 260 280 300 320 340 360 # edges in Sigmoid BN Crime dataset ? Dow Jones. 2001-2005 (1508 trading days) daily adjusted changes of the 30 index stocks. To avoid arbitrary imputation, two stocks not traded in all of these days were excluded (KFT,TRV). ? Crime (UCI repository). 100 observed variables relating to crime ranging from household size to fraction of children born outside of a marriage, for 1994 communities across the U.S. Figure 2 compares average log-probability (y-axis) for 10 random equal train/test splits as a function of the maximal number of parents allowed in the network (x-axis). Results for the linear Gaussian BN were almost identical to those of the sigmoid BN for the Wine and Dow Jones datasets and inferior for the Crime dataset, and are omitted for clarity. For all datasets, the copula based models offer a clear gain in training performance as well as in generalization on unseen test instances. Remarkably, the single parameter (for each local density) UnifCorr model is superior to the BN model even when the latter utilizes up to 8 local parameters (with 4 parents). In fact, even Frank?s single parameter Archimedean copula which is constrained by the fact that all of its K-marginals are equal [23], is superior to the BN model. 
Importantly, the advantage of the CBN model is significant as the units of improvement are in bits/instance. That is, an improvement of 2 bits/instance translates into each test instance being, on average, four times as likely.3 It is also important to note the benefit that comes with structures that are richer than a tree. As the number of allowed parents (x-axis) is increased, gains are relatively small when the dimensionality of the domain is limited (12 variables); The gains are, however, quite substantial for the more complex domains. To understand the role of the univariate marginals, we start with the no dependency network (0 on x-axis), where the advantage of CBNs is solely due to the use of flexible univariate marginals. Surprisingly, even with single parameter copulas, although much simpler than the Sigmoid form used for the BN model, we are able to maintain much of that advantage as the model becomes more complex. As expected, this is not the case when we constrain the CBN model to have normal marginals (Normal-UnifCorr) and when the domain is sufficiently complex (Crime). To get a sense of the overall dependency structure, Figure 3 shows the number of edges learned for the different models. For the Wine dataset, the linear BN attempts to compensate for its constrained form by using substantially more edges than the non-linear Sigmoid BN. The Kernel-UnifCorr CBN, in contrast, tends to use less edges while achieving higher test performance. Finally, the Normal-UnifCorr CBN model, despite the forced normal marginals, does not lead to overly complex structures as it is constrained by the simplicity of the copula function (single parameter). For the challenging Crime dataset, the differences are more pronounced: both the linear and non-linear BN models almost saturate the limit of 4 parents per variable, while the Kernel-UnifCorr copula model requires, on average, less than half the number of parents to achieve superior performance. Finally, in Figure 4, we demonstrate the qualitative advantage of CBNs by comparing empirical values from the test data (left) with samples generated from the different models. For the ?physical density? and ?alcohol? variables (top), the CBN samples (middle) are better than the BN ones (right), but not dramatically so. However, for the ?residual sugar? and ?physical density? pair (bottom), where the empirical dependence is far from normal, the advantage of the CBN representation is clear. We recall that the CBN model uses a simple normal copula so that the advantage is solely rooted in the distortion of the input to the copula created by the kernel-based univariate representation. With more expressive copulas we can expect further qualitative and quantitative advantages. 3 Note that the performance for the crime domain is on an unusually high scale since some of the variables are closely correlated, leading to peaked densities. We emphasize that this does not effect the relative merit of a method - an advantage of a bit/instance still translates to each instance being, on average, twice as likely. 
Figure 4: Demonstration of the dependency learned for the Wine dataset for two variable pairs. Compared is the empirical distribution in the test data (left) with samples generated from the learned CBN (middle) and BN (right) models. To eliminate the effect of differences in structure, the CBN model was forced to use the structure learned for the BN model, which contains the network fragment "residual sugar" → "physical density" → "alcohol level". (top) "physical density" vs. "alcohol level"; (bottom) "residual sugar" vs. "physical density".

6 Related Work

For lack of space we do not discuss direct multivariate copula constructions (e.g., [8, 15, 18, 22]) that are typically effective only for few dimensions, and focus on composite constructions that build on smaller (bivariate) copulas. The Vine model [3] relies on a recursive construction of bivariate copulas to parameterize a multivariate one. Although it uses a graphical representation, the framework is inherently different from ours: conditional independence is replaced with a conditional dependence whose parameters depend on the conditioning variable(s). Kurwicka and Cooke [17] reveal a direct connection between vines and belief networks, but that is limited to the scenario of elliptical bivariate copulas. Relying on the same representation, Aas et al. [1] suggest an alternative construction methodology. While the vine representation is certainly general, the need to condition on many variables using a somewhat elaborate construction limits practical applications to a modest number of variables. Aas et al. [1] do note the simplification that can result from making independence assumptions, but do not provide a general framework for doing so. Savu and Trede [26] suggest an alternative model that is limited to a hierarchical tree structure of bivariate Archimedean copulas. Kirshner [14] uses the copula product operator of Darsow et al. [5] to suggest a mixture-of-trees model that is directly motivated by the field of graphical models. The relationship of our model to theirs is the same as that of a general BN to a mixture-of-trees model [21]. Most recently, Liu et al. [20] consider a general sparse undirected copula-based model that is focused on the semi- and non-parametric aspect of modeling, and is specific to the case of the normal copula. Finally, it is important to put the dimension of the domains we consider in this work (up to 100 variables) in perspective. Copula applications are numerous, yet most are limited to a relatively small number (< 10) of variables. Heinen and Alfonso [11] are unique in that they consider 95 variables, but using an approach that is tailored to the specific details of the GARCH model.

7 Discussion and Future Work

We presented Copula Bayesian Networks, a marriage between the Bayesian network and copula frameworks. Building on a novel reparameterization of the conditional density, our model offers great flexibility in modeling high-dimensional continuous distributions while offering control over the form of the univariate marginals.
We applied our approach to three markedly different real-life datasets and, in all cases, demonstrated a consistent and significant generalization advantage. Our contribution is threefold. First, our framework allows us to flexibly "mix and match" local copulas and univariate densities of any form. Second, like BNs, we allow for independence assumptions that are more expressive than those possible with tree-based constructions, leading to generalization advantages. Third, we leverage existing machinery to perform model selection in significantly higher dimensions than typically considered in the copula literature. Thus, our work opens the door for numerous applications where the flexibility of copulas is needed but could not be previously utilized. In a companion paper [6], we also show that CBNs give rise to an efficient inference procedure. The gap between train and test performance for CBNs motivates the development of model selection scores tailored to the copula framework (e.g., based on rank correlation). It would also be interesting to see if our framework can be adapted to the cumulative scenario, while allowing for independencies quite different from the recently introduced cumulative network model [12].

Acknowledgements

I am grateful to Ariel Jaimovich, Amir Globerson, Nir Friedman and Fabio Spizzichino for their comments on earlier drafts of this manuscript. G. Elidan was supported by the Alon fellowship.

References

[1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependencies. Insurance: Mathematics and Economics, 44:182-198, 2009.
[2] R. Accioly and F. Chiyoshi. Modeling dependence with copulas: a useful tool for field development decision process. Journal of Petroleum Science and Engineering, 44:83-91, 2004.
[3] T. Bedford and R. Cooke. Vines: a new graphical model for dependent random variables. Annals of Statistics, 30(4):1031-1068, 2002.
[4] P. Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, 47(4):547-553, 2009.
[5] W. Darsow, B. Nguyen, and E. Olsen. Copulas and Markov processes. Illinois J. Math, 36:600-642, 1992.
[6] G. Elidan. Inference-less density estimation using Copula Bayesian Networks. In Uncertainty in Artificial Intelligence (UAI), 2010.
[7] P. Embrechts, F. Lindskog, and A. McNeil. Modeling dependence with copulas and applications to risk management. Handbook of Heavy Tailed Distributions in Finance, 2003.
[8] M. Fischer and C. Kock. Constructing and generalizing given multivariate copulas. Technical report, Working paper, University of Erlangen-Nurnberg, Nurnberg, 2007.
[9] N. Friedman and I. Nachman. Gaussian process networks. In Uncertainty in AI (UAI), 2000.
[10] F. Glover and M. Laguna. Tabu search. In C. Reeves, editor, Modern Heuristic Techniques for Combinatorial Problems, Oxford, England, 1993. Blackwell Scientific Publishing.
[11] A. Heinen and A. Alfonso. Asymmetric CAPM dependence for large dimensions: The canonical vine autoregressive copula model. ECORE Discussion Paper, 2008.
[12] J. Huang and B. Frey. Cumulative distribution networks and the derivative-sum-product algorithm. In Uncertainty in Artificial Intelligence (UAI), 2008.
[13] H. Joe and J. Xu. The estimation method of inference functions for margins for multivariate models. Technical Report 166, Department of Statistics, University of British Columbia, 1996.
[14] S. Kirshner. Learning with tree-averaged densities and distributions.
In Neural Information Processing Systems (NIPS), 2007.
[15] K. Koehler and J. Symanowski. Constructing multivariate distributions with specific marginal distributions. Journal of Multivariate Distributions, 55:261-282, 1995.
[16] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT, 2009.
[17] D. Kurwicka and R. Cooke. The vine copula method for representing high dimensional dependent distributions: Applications to continuous belief nets. In The Winter Simulation Conference, 2002.
[18] E. Liebscher. Modelling and estimation of multivariate copulas. Technical report, Working paper, University of Applied Sciences, Merseburg, 2006.
[19] F. Lindskog, A. McNeil, and U. Schmock. Kendall's tau for elliptical distributions. Credit Risk: Measurement, Evaluation and Management, pages 149-156, 2003.
[20] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2010.
[21] M. Meila and M. Jordan. Estimating dependency structure as a hidden variable. In Neural Information Processing Systems (NIPS), 1998.
[22] P. Morillas. A method to obtain new copulas from a given one. Metrika, 61:169-184, 2005.
[23] R. Nelsen. An Introduction to Copulas. Springer, 2007.
[24] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065-1076, 1962.
[25] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[26] C. Savu and M. Trede. Hierarchical Archimedean copulas. In the Conf. on High Frequency Finance, 2006.
[27] A. Schwaighofer, M. Dejori, V. Tresp, and M. Stetter. Structure learning with nonparametric decomposable models. In the International Conference on Artificial Neural Networks, 2007.
[28] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461-464, 1978.
[29] A. Sklar. Fonctions de repartition a n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Universite de Paris, 8:229-231, 1959.
Penalized Principal Component Regression on Graphs for Analysis of Subnetworks

George Michailidis
Department of Statistics and EECS, University of Michigan, Ann Arbor, MI 48109
gmichail@umich.edu

Ali Shojaie
Department of Statistics, University of Michigan, Ann Arbor, MI 48109
shojaie@umich.edu

Abstract

Network models are widely used to capture interactions among components of complex systems, such as social and biological systems. To understand their behavior, it is often necessary to analyze functionally related components of the system, corresponding to subsystems. The analysis of subnetworks may therefore provide additional insight into the behavior of the system that is not evident from its individual components. We propose a novel approach for incorporating available network information into the analysis of arbitrary subnetworks. The proposed method offers an efficient dimension reduction strategy using Laplacian eigenmaps with Neumann boundary conditions, and provides a flexible inference framework for analysis of subnetworks, based on a group-penalized principal component regression model on graphs. Asymptotic properties of the proposed inference method, as well as the choice of the tuning parameter for control of the false positive rate, are discussed in high dimensional settings. The performance of the proposed methodology is illustrated using simulated and real data examples from biology.

1 Introduction

Simultaneous analysis of groups of system components with similar functions, or subsystems, has recently received considerable attention. This problem is of particular interest in high dimensional biological applications, where changes in individual components may not reveal the underlying biological phenomenon, whereas the combined effect of functionally related components can improve the efficiency and interpretability of results. This idea has motivated the method of gene set enrichment analysis (GSEA), along with a number of related methods [1, 2]. The main premise of this method is that by assessing the significance of sets rather than individual components (i.e. genes), interactions among them can be preserved, and more efficient inference methods can be developed. A different class of models (see e.g. [3, 4] and references therein) has focused on directly incorporating the network information in order to achieve better efficiency in assessing the significance of individual components. These ideas have been combined in [5, 6], by introducing a model for incorporating the regulatory gene network, and developing an inference framework for analysis of subnetworks defined by biological pathways. In this framework, called NetGSA, a global model is introduced with parameters for individual genes/proteins, and the parameters are then combined appropriately in order to assess the significance of biological pathways. However, the main challenge in applying NetGSA in real-world biological applications is the extensive computational time. In addition, the total number of parameters allowed in the model is limited by the available sample size n (see Section 5).

In this paper, we propose a dimension reduction technique for networks, based on Laplacian eigenmaps, with the goal of providing an optimal low-dimensional projection for the space of random variables in each subnetwork. We then propose a general inference framework for analysis of subnetworks by reformulating the inference problem as a penalized principal component regression problem on the graph.
In Section 2, we review Laplacian eigenmaps and establish their connection to principal component analysis (PCA) for random variables on a graph. Inference for significance of subnetworks is discussed in Section 3, where we introduce Laplacian eigenmaps with Neumann boundary conditions and present the group-penalized principal component regression framework for analysis of arbitrary subnetworks. Results of applying the new methodology to simulated and real data examples are presented in Section 4, and the results are summarized in Section 5.

2 Laplacian Eigenmaps

Consider $p$ random variables $X_i$, $i = 1, \dots, p$ (e.g. expression values of genes) defined on the nodes of an undirected (weighted) graph $G = (V, E)$. Here $V$ is the set of nodes of $G$ and $E \subseteq V \times V$ its edge set. Throughout this paper, we represent the edge set and the strength of associations among nodes through the adjacency matrix of the graph, $A$. Specifically, $A_{ij} \ge 0$, and $i$ and $j$ are adjacent if $A_{ij}$ (and hence $A_{ji}$) is non-zero; in this case we write $i \sim j$. Finally, we denote the observed values of the random variables by the $n \times p$ data matrix $X$.

The subnetworks of interest are defined based on additional knowledge about their attributes and functions. In biological applications, these subnetworks are defined by common biological function, co-regulation or chromosomal location. The objective of the current paper is to develop dimension reduction methods on networks, in order to assess the significance of a priori defined subnetworks (e.g. biological pathways) with minimal information loss.

2.1 Graph Laplacian and Eigenmaps

Laplacian eigenmaps are defined using the eigenfunctions of the graph Laplacian, which is commonly used in spectral graph theory, computer science and image processing. Applications based on Laplacian eigenmaps include image segmentation and the normalized cut algorithm of [7], spectral clustering [8, 9] and collaborative filtering [10]. The Laplacian matrix and its eigenvectors have also been used in biological applications. For example, in [11], the Laplacian matrix has been used to define a network penalty for variable selection on graphs, and the interpretation of Laplacian eigenmaps as a Fourier basis was exploited in [12] to propose supervised and unsupervised classification methods.

Different definitions and representations have been proposed for the spectrum of a graph, and the results may vary depending on the definition of the Laplacian matrix (see [13] for a review). Here, we follow the notation in [13], and consider the normalized Laplacian matrix of the graph. To that end, let $D$ denote the diagonal degree matrix for $A$, i.e. $D_{ii} = \sum_j A_{ij} \equiv d_i$, and define the Laplacian matrix of the graph by $\mathcal{L} = D^{-1/2}(D - A)D^{-1/2}$, or entrywise

$$\mathcal{L}_{ij} = \begin{cases} 1 - \dfrac{A_{jj}}{d_j} & j = i,\ d_j \neq 0, \\[4pt] -\dfrac{A_{ij}}{\sqrt{d_i d_j}} & j \sim i, \\[4pt] 0 & \text{otherwise.} \end{cases}$$

It can be shown [13] that $\mathcal{L}$ is positive semidefinite with eigenvalues $0 = \lambda_0 \le \lambda_1 \le \dots \le \lambda_{p-1} \le 2$. Its eigenfunctions are known as the spectrum of $G$, and optimize the Rayleigh quotient

$$\frac{\langle g, \mathcal{L} g\rangle}{\langle g, g\rangle} = \frac{\sum_{i \sim j} \left(f(i) - f(j)\right)^2}{\sum_j f(j)^2 d_j}, \qquad (1)$$

where $g = D^{1/2} f$. It can be seen from (1) that the eigenfunction for the $0$-eigenvalue of $\mathcal{L}$ is $g = D^{1/2}\mathbf{1}$, corresponding to the average over the graph $G$. The first non-zero eigenvalue $\lambda_1$ is the harmonic eigenfunction of $\mathcal{L}$, which corresponds to the Laplace-Beltrami operator on Riemannian manifolds, and is given by

$$\lambda_1 = \inf_{f \perp D\mathbf{1}} \frac{\sum_{i \sim j} \left(f(i) - f(j)\right)^2}{\sum_j f(j)^2 d_j}.$$

More generally, denoting by $C_{k-1}$ the projection to the subspace of the first $k - 1$ eigenfunctions,

$$\lambda_k = \inf_{f \perp D C_{k-1}} \frac{\sum_{i \sim j} \left(f(i) - f(j)\right)^2}{\sum_j f(j)^2 d_j}.$$
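To make these objects concrete, the following minimal Python sketch (ours, not part of the paper; function names are illustrative) builds the normalized Laplacian $\mathcal{L} = D^{-1/2}(D-A)D^{-1/2}$ for a weighted adjacency matrix and returns its smallest eigenpairs, which play the role of the eigenfunctions optimizing (1):

```python
import numpy as np

def normalized_laplacian(A):
    """L = D^{-1/2} (D - A) D^{-1/2} for a symmetric, non-negative A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    return d_inv_sqrt[:, None] * (np.diag(d) - A) * d_inv_sqrt[None, :]

def laplacian_eigenmaps(A, k):
    """First k eigenpairs of L, in increasing order of eigenvalue."""
    evals, evecs = np.linalg.eigh(normalized_laplacian(A))
    return evals[:k], evecs[:, :k]

# Toy example: a weighted 4-cycle. Eigenvalues lie in [0, 2]; the
# smallest is 0, with eigenvector proportional to D^{1/2} 1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(laplacian_eigenmaps(A, 4)[0])
```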
2.2 Principal Component Analysis on Graphs

Previous applications of the graph Laplacian and its spectrum often focus on the properties of the graph; however, the connection to the probability distribution of the random variables on the nodes of the graph has not been strongly emphasized. In graphical models, the undirected graph $G$ among random variables corresponds naturally to a Markov random field [14]. The following result establishes the relationship between the Laplacian eigenmaps and the principal components of the random variables defined on the nodes of the graph, in the case of Gaussian observations.

Lemma 1. Let $X = (X_1, \dots, X_p)$ be random variables defined on the nodes of graph $G = (V, E)$ and denote by $\mathcal{L}$ and $\mathcal{L}^+$ the Laplacian matrix of $G$ and its Moore-Penrose generalized inverse. If $X \sim N(0, \Sigma)$, then $\mathcal{L}$ and $\mathcal{L}^+$ correspond to $\Omega$ and $\Sigma$, respectively ($\Omega \equiv \Sigma^{-1}$). In addition, let $\nu_0, \dots, \nu_{p-1}$ denote the eigenfunctions corresponding to the eigenvalues of $\mathcal{L}$. Then $\nu_0, \dots, \nu_{p-1}$ are the principal components of $X$, with $\nu_0$ corresponding to the leading principal component.

Proof. For Gaussian random variables, the inverse covariance (or precision) matrix has the same non-zero pattern as the adjacency matrix of the graph, i.e. for $i \neq j$, $\Omega_{ij} = 0$ iff $A_{ij} = 0$. Moreover, $\Omega_{ii} = \sigma_i^{-2}$, where $\sigma_i^2$ is the partial variance of $X_i$ (see e.g. [15]). However, using the conditional autoregression (CAR) representation of Gaussian Markov random fields [16], we can write

$$E(X_i \mid X_{-i}) = \sum_{j \sim i} c_{ij} X_j \qquad (2)$$

where $-i \equiv \{1, \dots, p\} \setminus i$, and $C = [c_{ij}]$ has the same non-zero pattern as the adjacency matrix of the graph $A$ and amounts to a proper probability distribution for $X$. In particular, by Brook's Lemma [16] it follows from (2) that $f_X(x) \propto \exp\{-\tfrac{1}{2} x^T T^{-1}(I_p - C)x\}$, where $T = \mathrm{diag}[\sigma_i^2]$. Therefore, $\Omega = T^{-1}(I_p - C)$, and hence $(I_p - C)$ should be positive definite. However, since $\mathcal{L} = I_p - D^{-1/2} A D^{-1/2}$ is positive semidefinite, we can set $C = D^{-1/2} A D^{-1/2} - \epsilon I$ for any $\epsilon > 0$. In other words, $(I_p - C) = \mathcal{L} + \epsilon I_p$, which implies that $\tilde{\mathcal{L}} \equiv \mathcal{L} + \epsilon I_p = T\Omega$, and hence $\tilde{\mathcal{L}}^{-1} = \Sigma T^{-1}$. Taking the limit as $\epsilon \to 0$, it follows that $\mathcal{L}$ and $\mathcal{L}^+$ correspond to $\Omega$ and $\Sigma$, respectively.

The second part follows directly from the above connection between $\tilde{\mathcal{L}}^{-1}$ and $\Sigma$. In particular, suppose, without loss of generality, that $\sigma_i^2 = 1$. Then it is easily seen that the principal components of $X$ are given by the eigenfunctions of $\tilde{\mathcal{L}}^{-1}$, which are in turn equal to the eigenfunctions of $\tilde{\mathcal{L}}$ with the ordering of the eigenvalues reversed. However, since the eigenfunctions of $\mathcal{L} + \epsilon I_p$ and $\mathcal{L}$ are equal, the principal components of $X$ are obtained from the eigenfunctions of $\mathcal{L}$.

[Figure 1: Left: A simple subnetwork of interest, marked with the dotted circle. Right: Illustration of the Neumann random walk; the dotted curve indicates the boundary of the subnetwork.]

Remark 2. An alternative justification for the above result, for general probability distributions defined on graphs, can be given by assuming that the graph represents "similarities" among random variables and using an optimal embedding of graph $G$ in a lower dimensional Euclidean space (for unweighted graphs, this justification was given by [17], using the unnormalized Laplacian matrix). In the case of one dimensional embedding, the goal is to find an embedding $v = (v_1, \dots, v_p)^T$ that preserves the distances among the nodes of the graph. The objective function of the embedding problem is then given by $Q = \sum_{i,j} (v_i - v_j)^2 A_{ij}$, or alternatively $Q = 2 v^T (D - A) v$ [17]. Thus, the optimal embedding is found by solving $\operatorname{argmin}_{v^T D v = 1} v^T (D - A) v$. Setting $u = D^{1/2} v$, this is solved by finding the eigenvector corresponding to the smallest eigenvalue of $\mathcal{L}$.
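As a quick numerical check of Lemma 1 (ours, not from the paper), one can build $\mathcal{L}$ for a small graph, take $\Omega = \mathcal{L} + \epsilon I$ as the precision matrix with unit partial variances ($T = I$, an assumption of this sketch), and verify that the leading principal component of $\Sigma = \Omega^{-1}$ aligns with the eigenfunction of $\mathcal{L}$ for the smallest eigenvalue:

```python
import numpy as np

# Adjacency of a small weighted graph: a chain of 5 nodes.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
d = A.sum(axis=1)
Dinv = np.diag(1.0 / np.sqrt(d))
L = np.eye(5) - Dinv @ A @ Dinv

eps = 1e-2
Omega = L + eps * np.eye(5)        # precision matrix, before eps -> 0
Sigma = np.linalg.inv(Omega)       # covariance of X

_, V_sigma = np.linalg.eigh(Sigma)  # eigenvalues ascending
_, V_lap = np.linalg.eigh(L)
# The leading PC of Sigma (largest eigenvalue) matches nu_0, the
# eigenvector of L for its smallest eigenvalue, up to sign.
lead_pc = V_sigma[:, -1]
nu0 = V_lap[:, 0]
print(abs(lead_pc @ nu0))           # close to 1
```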
Lemma 1 provides an efficient dimension reduction framework that summarizes the information in the entire network into a few feature vectors. Although the resulting dimension reduction method can be used efficiently in classification (as in [12]), the eigenfunctions of $G$ do not provide any information about the significance of arbitrary subnetworks, and therefore cannot be used to analyze changes in subnetworks. In the next section, we introduce a restricted version of Laplacian eigenmaps, and discuss the problem of analysis of subnetworks.

3 Analysis of Subnetworks and PCR on Graphs (GPCR)

In [5], the authors argue that to analyze the effect of subnetworks, the test statistic needs to represent the pure effect of the subnetwork, without being influenced by external nodes, and propose an inference procedure based on mixed linear models to achieve this goal. However, in order to achieve dimension reduction, we need a method that only incorporates local information at the level of each subnetwork, and possibly its neighbors (see the left panel of Figure 1).

Using the connection of the Laplace operator on Riemannian manifolds to heat flow (see e.g. [17]), the problem of analysis of arbitrary subnetworks can be reformulated as a heat equation with boundary conditions. It then follows that in order to assess the "effect" of each subnetwork, the appropriate boundary conditions should block the flow of heat at the boundary of the set. This corresponds to insulating the boundary, also known as the Neumann boundary condition. For the general heat equation $\phi(v, x)$, this boundary condition is given by $\frac{\partial \phi}{\partial v}(x) = 0$ at each boundary point $x$, where $v$ is the normal direction orthogonal to the tangent hyperplane at $x$.

The eigenvalues of subgraphs with boundary conditions are studied in [13]. In particular, let $S$ be any (connected) subnetwork of $G$, and denote by $\delta S$ the boundary of $S$ in $G$. The Neumann boundary condition states that for every $x \in \delta S$, $\sum_{y:\{x,y\} \in \partial S} (f(x) - f(y)) = 0$. The Neumann eigenfunctions of $S$ are then the optimizers of the restricted Rayleigh quotient

$$\lambda_{S,i} = \inf_{f} \sup_{g \in C_{i-1}} \frac{\sum_{\{t,u\} \in S \cup \delta S} \left(f(t) - f(u)\right)^2}{\sum_{t \in S} \left(f(t) - g(t)\right)^2 d_t}$$

where $C_{i-1}$ is the projection to the space of the previous eigenfunctions.

In [13], a connection between the Neumann boundary conditions and a reflected random walk on the graph is established, and it is shown that the Neumann eigenvectors can be alternatively calculated from the eigenvectors of the transition probability matrix of this reflected random walk, also known as the Neumann random walk (see [13] for additional details). Here, we generalize this idea to weighted adjacency matrices. Let $\tilde{P}$ and $P$ denote the transition probability matrices of the reflected random walk and of the original random walk defined on $G$, respectively. Noting that $P = D^{-1} A$, we can extend the results in [13] as follows. For the general case of weighted graphs, define the transition probability matrix of the reflected random walk by

$$\tilde{P}_{ij} = \begin{cases} P_{ij} & j \sim i,\ i, j \in S, \\[4pt] P_{ij} + \displaystyle\sum_{k \notin S:\, j \sim k \sim i} \frac{A_{ik} A_{kj}}{d_i\, d_k^0} & j \sim k \sim i \text{ for some } k \notin S, \\[4pt] 0 & \text{otherwise,} \end{cases} \qquad (3)$$

where $d_k^0 = \sum_{i \sim k,\, i \in S} A_{ki}$ denotes the degree of node $k$ in $S$. Then the Neumann eigenvalues are given by $\lambda_i = 1 - \rho_i$, where $\rho_i$ is the $i$th eigenvalue of $\tilde{P}$.
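The following sketch implements one plausible reading of (3); it is ours, and the indexing conventions and the summation over multiple boundary paths are assumptions rather than the authors' code. It builds $\tilde{P}$ restricted to a subnetwork $S$ and recovers the Neumann eigenvalues as $\lambda_i = 1 - \rho_i$:

```python
import numpy as np

def neumann_walk(A, S):
    """Transition matrix of the reflected (Neumann) random walk of (3),
    restricted to index set S, for a weighted adjacency matrix A."""
    n = A.shape[0]
    S = list(S)
    out = [k for k in range(n) if k not in S]
    d = A.sum(axis=1)                      # degrees in G
    d0 = {k: A[k, S].sum() for k in out}   # degree of boundary node k in S
    P = A / d[:, None]                     # original walk P = D^{-1} A
    Pt = np.zeros((len(S), len(S)))
    for a, i in enumerate(S):
        for b, j in enumerate(S):
            Pt[a, b] = P[i, j]
            for k in out:                  # paths i -> k -> j reflected back
                if A[i, k] > 0 and A[k, j] > 0 and d0[k] > 0:
                    Pt[a, b] += A[i, k] * A[k, j] / (d[i] * d0[k])
    return Pt

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0.0]])
Pt = neumann_walk(A, S=[0, 1, 2])
rho = np.sort(np.linalg.eigvals(Pt).real)[::-1]  # real for this example
print(1 - rho)   # Neumann eigenvalues
```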
Remark 3. The connection with the Neumann random walk also sheds light on the effect of the proposed boundary condition on the joint probability distribution of the random variables on the graph. To illustrate this, consider the simple graph in the right panel of Figure 1. For the moment, suppose that the random variables $X_1, X_2, X_3$ are Gaussian, and the edges from $X_1$ and $X_2$ to $X_3$ are directed. As discussed in [5], the joint probability distribution of the random variables on the graph is then given by a linear structural equation model:

$$X_1 = \gamma_1, \qquad X_2 = \gamma_2, \qquad X_3 = \rho_1 X_1 + \rho_2 X_2 + \gamma_3, \qquad \text{i.e. } Y = \Lambda^T \gamma, \quad \Lambda = \begin{pmatrix} 1 & 0 & \rho_1 \\ 0 & 1 & \rho_2 \\ 0 & 0 & 1 \end{pmatrix}.$$

Then the conditional probability distribution of $X_1$ and $X_2$ given $X_3$ is Gaussian, with inverse covariance matrix given by

$$\begin{pmatrix} 1 + \rho_1^2 & \rho_1 \rho_2 \\ \rho_1 \rho_2 & 1 + \rho_2^2 \end{pmatrix}. \qquad (4)$$

A comparison between (3) and (4) then reveals that the proposed Neumann random walk corresponds to conditioning on the boundary variables, if the edges going from the set $S$ to its boundary are directed. The reflected random walk, for the original problem, therefore corresponds to first setting all influences from other nodes in the graph to nodes in the set $S$ to zero (resulting in directed edges) and then conditioning on the boundary variables. Therefore, the proposed method offers a compromise relative to the full model of [5], based on local information at the level of each subnetwork.

3.1 Group-Penalized PCR on Graphs

Using the Neumann eigenvectors of subnetworks, we now define a principal component regression on graphs, which can be used to analyze the significance of subnetworks. Let $N_j$ denote the $|S_j| \times m_j$ matrix of the $m_j$ smallest Neumann eigenfunctions for subgraph $S_j$. Also, let $X^{(j)}$ be the $n \times |S_j|$ matrix of observations for the $j$-th subnetwork. An $m_j$-dimensional projection of the original data matrix $X^{(j)}$ is then given by $\tilde{X}^{(j)} = X^{(j)} N_j$.

Different methods can be used to determine the number of eigenfunctions $m_j$ for each subnetwork. A simple procedure sets a predefined threshold for the proportion of variance explained by each eigenfunction; these proportions can be determined from the reciprocals of the Neumann eigenvalues (ignoring the 0-eigenvalue). To simplify the presentation, here we assume $m_j = m$ for all $j$.

The significance of subnetwork $S_j$ is a function of the combined effect of all its nodes, captured by the transformed data matrix $\tilde{X}^{(j)}$. This can be evaluated by forming a multivariate ANOVA (MANOVA) model. Formally, let $y$ be the $mn \times 1$ vector of observations obtained by stacking all the transformed data matrices $\tilde{X}^{(j)}$. Also, let $\mathcal{X}$ be the $mn \times Jmr$ design matrix corresponding to the experimental settings, where $r$ is the number of parameters used to model experimental conditions, and let $\beta$ be the vector of regression coefficients. For simplicity, here we focus on the case of a two-class inference problem (e.g. treatment vs. control); extensions to more general experimental settings follow naturally and are discussed in Section 5. To evaluate the combined effect of each subnetwork, we impose a group penalty on the coefficients of the regression of $y$ on the design matrix $\mathcal{X}$. In particular, using the group lasso penalty [18], we estimate the significance of the subnetwork by solving the following optimization problem (which can be solved using the R package grplasso [19]):

$$\operatorname*{argmin}_{\beta} \left\{ n^{-1} \Big\| y - \sum_{j=1}^{J} \mathcal{X}^{(j)} \beta^{(j)} \Big\|_2^2 + \lambda \sum_{j=1}^{J} \big\| \beta^{(j)} \big\|_2 \right\} \qquad (5)$$

where $J$ is the total number of subnetworks considered, and $\mathcal{X}^{(j)}$ and $\beta^{(j)}$ denote the columns of $\mathcal{X}$ and the entries of $\beta$ corresponding to subnetwork $j$, respectively.
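Problem (5) is an ordinary group lasso and can be solved by block soft-thresholding. The sketch below is ours (the paper instead relies on the R package grplasso [19]) and uses proximal gradient descent:

```python
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500, lr=None):
    """Proximal-gradient solver for (5): least squares + group lasso.
    `groups` is a list of index arrays, one per subnetwork."""
    n, p = X.shape
    beta = np.zeros(p)
    if lr is None:
        # step size from the Lipschitz constant of the smooth part
        lr = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 / n)
    for _ in range(n_iter):
        grad = -2.0 / n * X.T @ (y - X @ beta)
        beta = beta - lr * grad
        for g in groups:                   # block soft-thresholding
            nrm = np.linalg.norm(beta[g])
            beta[g] = 0.0 if nrm == 0 else max(0.0, 1 - lr * lam / nrm) * beta[g]
    return beta

# Subnetworks whose coefficient block is non-zero are flagged significant:
# sig = [j for j, g in enumerate(groups) if np.linalg.norm(beta[g]) > 1e-8]
```

The block soft-thresholding step is exactly the proximal operator of the $\lambda \sum_j \|\beta^{(j)}\|_2$ penalty, which either shrinks a whole block or zeroes it out; this is what makes the group lasso select entire subnetworks at once.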
In equation (5), $\lambda$ is the tuning parameter, usually determined by $k$-fold cross validation or evaluation on independent data sets. However, since the goal of our analysis is to determine the significance of subnetworks, $\lambda$ should be determined so that the probability of false positives is controlled at a given significance level $\alpha$. Here we adapt the approach in [20] and determine the optimal value of $\lambda$ so that the family-wise error rate (FWER) in repeated sampling with replacement (bootstrap) is controlled at the level $\alpha$ (additional details for this method are given in [20], but are excluded here due to space limitations). Specifically, let $q_\Lambda^i$ be the total number of subnetworks considered significant based on the value of $\Lambda$ in the $i$th bootstrap sample, and let $\pi$ be the threshold for selection of variables as significant. In other words, if $P_i^{(j)}$ is the probability of selecting the coefficients corresponding to subnetwork $j$ in the $i$th bootstrap sample, subnetwork $j$ is considered significant if $\max_\Lambda P_i^{(j)} \ge \pi$. Using this method, we select $\Lambda$ such that $q_\Lambda = \sqrt{(2\pi - 1)\alpha p}$.

The following result shows that the proposed methodology correctly selects the significant subnetworks, while controlling FWER at level $\alpha$. We begin by introducing some additional notation and assumptions. We assume the columns of the design matrix $\mathcal{X}$ are normalized so that $n^{-1} \mathcal{X}_i^T \mathcal{X}_i = 1$. Throughout this paper, we consider the case where the total number of nodes in the graph, $p$, and the number of design parameters, $r$, are allowed to diverge (the $p \gg n$ setting). In addition, let $s$ be the total number of non-zero elements in the true regression vector $\beta$.

Theorem 4. Suppose that $m, n \ge 1$ and there exist $\kappa \ge 1$ and $t \ge s \ge 1$ such that $n^{-1}|\mathcal{X}_i^T \mathcal{X}_j| \le (7\kappa t)^{-1}$ for all $i \neq j$. Also suppose that for $j \neq k$, the transformed random variables $\tilde{X}^{(j)}$ and $\tilde{X}^{(k)}$ are independent. If the tuning parameter $\Lambda$ is selected such that $q_\Lambda = \sqrt{(2\pi - 1)\alpha r p}$, then (i) there exists $\delta = \delta(n, p) > 0$ such that $\delta \to 0$ as $n \to \infty$ and, with probability at least $1 - \delta$, the significant subnetworks are correctly selected; and (ii) the family-wise error rate is controlled at the level $\alpha$.

Outline of the Proof. First note that the MANOVA model presented above can be reformulated as a multi-task learning problem [21]. Upon establishing the fact that for the proposed tuning parameter $\lambda \asymp \sqrt{\log p/(n m^{3/2})}$, it follows from the results in [22] that for each bootstrap sample there exists $\epsilon = \epsilon(n) > 0$ such that with probability at least $1 - (rp)^{-\epsilon}$ the significant subnetworks are correctly selected. Thus if $\pi \ge 1 - (rp)^{-\epsilon}$, the coefficients for significant subnetworks are included in the final model with high probability. In particular, it can be shown that $\delta = \Phi\{\sqrt{B}\,(1 - (rp)^{-\epsilon} - \pi)/2\}$, where $B$ is the number of bootstrap samples and $\Phi$ is the cumulative normal distribution. This proves the first claim.

Next, note that the normality assumption, and the fact that the eigenfunctions within each subnetwork are orthogonal, imply that for each $j$, $\tilde{X}_i^{(j)}$, $i = 1, \dots, m$, are independent. Moreover, the assumption of independence of $\tilde{X}^{(j)}$ and $\tilde{X}^{(k)}$ for $j \neq k$ implies that the values of $y$ are independent realizations of i.i.d standard normal random variables. On the other hand, the Karush-Kuhn-Tucker conditions for the optimization problem in (5) imply that $\beta^{(j)} \neq 0$ iff $(nm)^{-1}\langle (y - \mathcal{X}\beta), \mathcal{X}^{(j)} \rangle = \operatorname{sgn}(\beta^{(j)})\,\lambda$, where $\langle x, y \rangle$ denotes the inner product. It is hence clear that the indicators $1[\beta^{(j)} \neq 0]$ are exchangeable.
Combining this with the first part of the theorem, the claim follows from Theorem 1 of [20].

Remark 5. The main assumption of Theorem 4 is the independence of the variables in different subnetworks. Although this is not satisfied in general problems, it may be satisfied by the conditioning argument of Remark 3. It is possible to further relax this assumption using an argument similar to Theorem 2 of [20], but we do not pursue this here.

4 Experiments

We illustrate the performance of the proposed method using simulated data motivated by biological applications, as well as a real data application based on gene expression analysis. In the simulation, we generate a small network of 80 nodes (genes), with 8 subnetworks. The random variables (expression levels of genes) are generated according to a normal distribution with mean $\mu$. Under the null hypothesis, $\mu_{null} = 1$ and the association weight $\rho$ for all edges of the network is set to 0.2. The settings of the parameters under the alternative hypothesis are given in Table 1, where $\mu_{alt} = 3$. These settings are illustrated in the left panel of Figure 2. Table 1 also includes the estimated powers of the tests for subnetworks based on 200 simulations with $n = 50$ observations. It can be seen that the proposed GPCR method offers improvements over GSEA [1], especially in the case of subnetworks 3 and 6. However, it results in less accurate inference compared to NetGSA [5].

Table 1: Parameter settings under the alternative and estimated powers for the simulation study.

Subnet  % of nodes at mu_alt  rho  |  NetGSA  GPCR  GSEA
1       0.05                  0.2  |  0.02    0.08  0.01
2       0.20                  0.2  |  0.03    0.21  0.02
3       0.50                  0.2  |  1.00    0.65  0.27
4       0.80                  0.2  |  1.00    0.81  0.90
5       0.05                  0.6  |  0.94    0.41  0.12
6       0.20                  0.6  |  1.00    0.61  0.15
7       0.50                  0.6  |  1.00    0.99  0.97
8       0.80                  0.6  |  1.00    0.99  1.00

In [5], the pathways involved in Galactose utilization in yeast were analyzed based on the data from [23], and the performances of the NetGSA and GSEA methods were compared. The interactions among genes, along with the significance of individual genes (based on single gene analysis), are shown in the right panel of Figure 2, and the results of significance analysis based on NetGSA, GSEA and the proposed GPCR are given in Table 2. As in the simulated example, the results of this analysis indicate that GPCR yields improved efficiency over GSEA, while failing to detect the significance of some of the pathways detected by NetGSA.

[Figure 2: Left: Setting of the simulation parameters under the alternative hypothesis. Right: Network of yeast genes involved in Galactose utilization.]
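A data-generating sketch for a simulation of this kind is shown below. It is ours: the paper varies the fraction of affected nodes and $\rho$ per subnetwork as in Table 1, while this sketch fixes one setting and uses a block-equicorrelation covariance as a simple stand-in for the network-induced dependence:

```python
import numpy as np

def simulate(n=50, p=80, n_subnets=8, frac=0.5, mu_alt=3.0, rho=0.2,
             null=False, seed=0):
    """One dataset in the spirit of Section 4: p = 80 genes in 8 equal
    subnetworks, mean 1 under the null, and a fraction `frac` of nodes
    shifted to mu_alt under the alternative."""
    rng = np.random.default_rng(seed)
    size = p // n_subnets
    mu = np.ones(p)
    if not null:
        for j in range(n_subnets):
            k = int(frac * size)
            mu[j * size : j * size + k] = mu_alt
    # Equicorrelated covariance within each subnetwork, identity between.
    Sigma = np.eye(p)
    for j in range(n_subnets):
        blk = slice(j * size, (j + 1) * size)
        Sigma[blk, blk] = rho
        Sigma[blk, blk] += (1 - rho) * np.eye(size)
    return rng.multivariate_normal(mu, Sigma, size=n)

X = simulate()   # one alternative-hypothesis dataset with n = 50
```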
5 Conclusion

We proposed a principal component regression method for graphs, called GPCR, using Laplacian eigenmaps with Neumann boundary conditions. The proposed method offers a systematic approach for dimension reduction in networks with a priori defined subnetworks of interest. It can incorporate both weighted and unweighted adjacency matrices, and can be easily extended to the analysis of complex experimental conditions through the framework of linear models. The method can also be used in longitudinal and time-course studies. Our simulation studies and the real data example indicate that the proposed GPCR method offers significant improvements over gene set enrichment analysis (GSEA); however, it does not achieve optimal power in comparison to NetGSA.

This difference in power may be attributable to the mechanism of incorporating the network information in the two methods: while NetGSA incorporates the full network information, GPCR only accounts for local network information, at the level of each subnetwork, and restricts interactions with the rest of the network through the Neumann boundary condition. However, the most computationally involved step in NetGSA requires $O(p^3)$ operations, whereas the computational cost of GPCR is $O(m^3)$. Since $m \ll p$ in most applications, GPCR can yield significant improvements in computational time and memory requirements for the analysis of high dimensional networks. In addition, NetGSA requires that $r < n$, whereas the dimension reduction and penalization in GPCR remove the need for any such restriction and facilitate the analysis of complex experiments in settings with small sample sizes.

Acknowledgments

Funding for this work was provided by NIH grants 1RC1CA145444-0110 and 5R01LM010138-02.

Table 2: Significance of pathways in Galactose utilization. Pathway sizes are shown in parentheses; the per-method significance marks for NetGSA, GPCR and GSEA are not recoverable from the source layout.

rProtein Synthesis (28), Glycolytic Enzymes (16), RNA Processing (75), Fatty Acid Oxidation (7), O2 Stress (13), Mating, Cell Cycle (58), Vesicular Transport (19), Amino Acid Synthesis (30), Sugar Transport (2), Glycogen Metabolism (12), Stress (12), Metal Uptake (4), Respiration (9), Gluconeogenesis (7), Galactose Utilization (12).

References

[1] A. Subramanian, P. Tamayo, V.K. Mootha, S. Mukherjee, B.L. Ebert, M.A. Gillette, A. Paulovich, S.L. Pomeroy, T.R. Golub, E.S. Lander, et al. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences, 102(43):15545-15550, 2005.
[2] B. Efron and R. Tibshirani. On testing the significance of sets of genes. Annals of Applied Statistics, 1(1):107-129, 2007.
[3] T. Ideker, O. Ozier, B. Schwikowski, and A.F. Siegel. Discovering regulatory and signalling circuits in molecular interaction networks. Bioinformatics, 18(1):S233-S240, 2002.
[4] Zhi Wei and Hongzhe Li. A Markov random field model for network-based analysis of genomic data. Bioinformatics, 2007.
[5] A. Shojaie and G. Michailidis. Analysis of gene sets based on the underlying regulatory network. Journal of Computational Biology, 16(3):407-426, 2009.
[6] A. Shojaie and G. Michailidis. Network enrichment analysis in complex experiments. Statistical Applications in Genetics and Molecular Biology, 9(1), Article 22, 2010.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[8] M. Saerens, F. Fouss, L. Yen, and P. Dupont. The principal components analysis of a graph, and its relationships to spectral clustering. In Machine Learning: ECML 2004, pages 371-383, 2004.
[9] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849-856, 2002.
[10] F. Fouss, A. Pirotte, J.M. Renders, and M. Saerens. A novel way of computing dissimilarities between nodes of a graph, with application to collaborative filtering and subspace projection of the graph nodes. In European Conference on Machine Learning Proceedings, ECML, 2004.
[11] C. Li and H. Li. Variable selection and regression analysis for graph-structured covariates with an application to genomics. Annals of Applied Statistics, in press, 2010.
[12] F. Rapaport, A. Zinovyev, M. Dutreix, E. Barillot, and J.P. Vert. Classification of microarray data using gene networks. BMC Bioinformatics, 8(1):35, 2007.
[13] F.R.K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[14] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[15] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications. Chapman & Hall, 2005.
[16] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192-236, 1974.
[17] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in Neural Information Processing Systems, 1:585-592, 2002.
[18] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1):49, 2006.
[19] L. Meier, S. van de Geer, and P. Buhlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 70(1):53, 2008.
[20] N. Meinshausen and P. Buhlmann. Stability selection. Preprint, arXiv:0809, 2009.
[21] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[22] K. Lounici, M. Pontil, A.B. Tsybakov, and S. van de Geer. Taking advantage of sparsity in multi-task learning. Preprint, arXiv:0903, 2009.
[23] T. Ideker, V. Thorsson, J.A. Ranish, R. Christmas, J. Buhler, J.K. Eng, R. Bumgarner, D.R. Goodlett, R. Aebersold, and L. Hood. Integrated genomic and proteomic analyses of a systematically perturbed metabolic network. Science, 292(5518):929, 2001.
Sample complexity of testing the manifold hypothesis

Hariharan Narayanan
Laboratory for Information and Decision Systems, EECS, MIT, Cambridge, MA 02139
har@mit.edu

Sanjoy Mitter
Laboratory for Information and Decision Systems, EECS, MIT, Cambridge, MA 02139
mitter@mit.edu

(Research supported by grant CCF-0836720.)

Abstract

The hypothesis that high dimensional data tends to lie in the vicinity of a low dimensional manifold is the basis of a collection of methodologies termed Manifold Learning. In this paper, we study statistical aspects of the question of fitting a manifold with a nearly optimal least squared error. Given upper bounds on the dimension, volume, and curvature, we show that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is independent of the ambient dimension of the space in which data lie. We obtain an upper bound on the required number of samples that depends polynomially on the curvature, exponentially on the intrinsic dimension, and linearly on the intrinsic volume. For constant error, we prove a matching minimax lower bound on the sample complexity that shows that this dependence on intrinsic dimension, volume and curvature is unavoidable. Whether the known lower bound of $O\!\left(\frac{k}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$ for the sample complexity of Empirical Risk Minimization on $k$-means applied to data in a unit ball of arbitrary dimension is tight has been an open question since 1997 [3]. Here $\epsilon$ is the desired bound on the error and $\delta$ is a bound on the probability of failure. We improve the best currently known upper bound [14] of $O\!\left(\frac{k^2}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$ to $O\!\left(\frac{k}{\epsilon^2}\min\!\left(k, \frac{\log^4 \frac{k}{\epsilon}}{\epsilon^2}\right) + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$. Based on these results, we devise a simple algorithm for $k$-means and another that uses a family of convex programs to fit a piecewise linear curve of a specified length to high dimensional data, where the sample complexity is independent of the ambient dimension.

1 Introduction

We are increasingly confronted with very high dimensional data in speech signals, images, gene-expression data, and other sources. Manifold Learning can be loosely defined to be a collection of methodologies that are motivated by the belief that this hypothesis (henceforth called the manifold hypothesis) is true. It includes a number of extensively used algorithms such as Locally Linear Embedding [17], ISOMAP [19], Laplacian Eigenmaps [4] and Hessian Eigenmaps [8]. The sample complexity of classification is known to be independent of the ambient dimension [15] under the manifold hypothesis (assuming the decision boundary is a manifold as well), thus obviating the curse of dimensionality. A recent empirical study [6] of a large number of $3 \times 3$ images, represented as points in $\mathbb{R}^9$, revealed that they approximately lie on a two dimensional manifold known as the Klein bottle.

[Figure 1: Fitting a torus to data.]

On the other hand, knowledge that the manifold hypothesis is false with regard to certain data would give us reason to exercise caution in applying algorithms from manifold learning and would provide an incentive for further study. It is thus of considerable interest to know whether given data lie in the vicinity of a low dimensional manifold. Our primary technical results are the following.

1. We obtain uniform bounds relating the empirical squared loss and the true squared loss over a class $\mathcal{F}$ consisting of manifolds whose dimensions, volumes and curvatures are bounded, in Theorems 1 and 2.
These bounds imply upper bounds on the sample complexity of Empirical Risk Minimization (ERM) that are independent of the ambient dimension, exponential in the intrinsic dimension, polynomial in the curvature and almost linear in the volume.

2. We obtain a minimax lower bound on the sample complexity of any rule for learning a manifold from $\mathcal{F}$ in Theorem 6, showing that for a fixed error, the dependence of the sample complexity on intrinsic dimension, curvature and volume must be at least exponential, polynomial, and linear, respectively.

3. We improve the best currently known upper bound [14] on the sample complexity of Empirical Risk Minimization on $k$-means applied to data in a unit ball of arbitrary dimension from $O\!\left(\frac{k^2}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$ to $O\!\left(\frac{k}{\epsilon^2}\min\!\left(k, \frac{\log^4 \frac{k}{\epsilon}}{\epsilon^2}\right) + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$. Whether the known lower bound of $O\!\left(\frac{k}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$ is tight has been an open question since 1997 [3]. Here $\epsilon$ is the desired bound on the error and $\delta$ is a bound on the probability of failure.

One technical contribution of this paper is the use of dimensionality reduction via random projections in the proof of Theorem 5 to bound the Fat-Shattering dimension of a function class, elements of which roughly correspond to the squared distance to a low dimensional manifold. The application of the probabilistic method involves a projection onto a low dimensional random subspace. This is then followed by arguments of a combinatorial nature involving the VC dimension of halfspaces, and the Sauer-Shelah Lemma applied with respect to the low dimensional subspace. While random projections have frequently been used in machine learning algorithms, for example in [2, 7], to our knowledge they have not been used as a tool to bound the complexity of a function class. We illustrate the algorithmic utility of our uniform bound by devising an algorithm for $k$-means and a convex programming algorithm for fitting a piecewise linear curve of bounded length. For a fixed error threshold and length, the dependence on the ambient dimension is linear, which is optimal since this is the complexity of reading the input.

2 Connections and Related work

In the context of curves, [10] proposed "Principal Curves", where it was suggested that a natural curve that may be fit to a probability distribution is one where every point on the curve is the center of mass of all those points to which it is the nearest point. A different definition of a principal curve was proposed by [12], where the authors attempted to find piecewise linear curves of bounded length which minimize the expected squared distance to a random point from a distribution. That paper studies the decay of the error rate as the number of samples tends to infinity, but does not analyze the dependence of the error rate on the ambient dimension and the bound on the length. We address this in a more general setup in Theorem 4, and obtain sample complexity bounds that are independent of the ambient dimension and depend linearly on the bound on the length.
) and is based on      log4 k log 1 Rademacher complexities. We improve this bound to O k2 min k, 2  + 2 ? , using an argument that bounds the Fat-Shattering dimension of the appropriate function class using random projections and the Sauer-Shelah Lemma. Generalizations of principal curves to parameterized principal manifolds in certain regularized settings have been studied in [18]. There, the sample complexity was related to the decay of eigenvalues of a Mercer kernel associated with the regularizer. When the manifold to be fit is a set of k points (k?means), we obtain a bound on the sample complexity s that is independent of m and depends at most linearly on k, which also leads to an approximation algorithm with additive error, based on sub-sampling. If one allows a multiplicative error of 4 in addition to an additive error of , a statement of this nature has been proven by BenDavid (Theorem 7, [5]). 3 Upper bounds on the sample complexity of Empirical Risk Minimization In the remainder of the paper, C will always denote a universal constant which may differ across the paper. For any submanifold MR contained in, and probability distribution P supported on the unit ball B in Rm , let L(M, P) := d(M, x)2 dP(x). Given a set of i.i.d points x = {x1 , . . . , xs } from P, a tolerance  and a class of P manifolds F, Empirical Risk Minimization (ERM) outputs a s manifold in Merm (x) ? F such that i=1 d(xi , Merm )2 ? /2+inf N ?F d(xi , N )2 . Given error parameters , ?, and a rule A that outputs a manifold in F when provided with a set of samples, we define the sample complexity s = s(, ?, A) to be the least number such that for any probability distribution P in the unit ball, if the result of A applied to a set of at least s i.i.d random samples from P is N , then P [L(N , P) < inf M?F L(M, P) + ] > 1 ? ?. 3.1 Bounded intrinsic curvature Let M be a Riemannian manifold and let p ? M. Let ? be a geodesic starting at p. Definition 1. The first point on ? where ? ceases to minimize distance is called the cut point of p along M. The cut locus of p is the set of cut points of M. The injectivity radius is the minimum taken over all points of the distance between the point and its cut locus. M is complete if it is complete as a metric space. Let Gi = Gi (d, V, ?, ?) be the family of all isometrically embedded, complete Riemannian submanifolds of B having dimension less or equal to d, induced d?dimensional volume less or equal to V , sectional curvature less or equal to ? and injectivity radius greater or equal to ?. Let   d d Uint ( 1 , d, V, ?, ?) := V C min(,?,? , which for brevity, we denote Uint . ?1/2 ) Theorem 1. Let  and ? be error parameters. If        1 Uint Uint 1 1 4 s ? C min log , Uint + 2 log , 2  2  ? and x = {x1 , . . . , xs } is a set of i.i.d points from P then,   P L(Merm (x), P) ? inf L(M, P) <  > 1 ? ?. M?Gi The proof of this theorem is deferred to Section 4. 3.2 Bounded extrinsic curvature We will consider submanifolds of B that have the property that around each of them, for any radius r < ? , the boundary of the set of all points within a distance r is smooth. This class of submanifolds 3 has appeared in the context of manifold learning [16, 15]. The condition number is defined as follows. Definition 2 (Condition Number). Let M be a smooth d?dimensional submanifold of Rm . We define the condition number c(M) to be ?1 , where ? is the largest number to have the property that for any r < ? no two normals of length r that are incident on M have a common point unless it is on M. 
3.2 Bounded extrinsic curvature

We will consider submanifolds of $B$ that have the property that around each of them, for any radius $r < \tau$, the boundary of the set of all points within a distance $r$ is smooth. This class of submanifolds has appeared in the context of manifold learning [16, 15]. The condition number is defined as follows.

Definition 2 (Condition Number). Let $\mathcal{M}$ be a smooth $d$-dimensional submanifold of $\mathbb{R}^m$. We define the condition number $c(\mathcal{M})$ to be $\frac{1}{\tau}$, where $\tau$ is the largest number having the property that for any $r < \tau$, no two normals of length $r$ that are incident on $\mathcal{M}$ have a common point unless it is on $\mathcal{M}$.

Let $\mathcal{G}_e = \mathcal{G}_e(d, V, \tau)$ be the family of Riemannian submanifolds of $B$ having dimension $\le d$, volume $\le V$ and condition number $\le \frac{1}{\tau}$. Let $\epsilon$ and $\delta$ be error parameters, and let

$$U_{ext}\!\left(\frac{1}{\epsilon}, d, \tau\right) := V \left(\frac{C}{\min(\epsilon, \tau)}\right)^{d},$$

which for brevity we denote $U_{ext}$.

Theorem 2. If

$$s \ge C \left( \frac{U_{ext}}{\epsilon^{2}} \min\!\left( U_{ext},\, \frac{1}{\epsilon^{2}}\log^{4} \frac{U_{ext}}{\epsilon} \right) + \frac{1}{\epsilon^{2}}\log\frac{1}{\delta} \right)$$

and $x = \{x_1, \dots, x_s\}$ is a set of i.i.d points from $\mathcal{P}$, then

$$\mathbb{P}\left[ \mathcal{L}(\mathcal{M}_{erm}(x), \mathcal{P}) - \inf_{\mathcal{M}} \mathcal{L}(\mathcal{M}, \mathcal{P}) < \epsilon \right] > 1 - \delta. \qquad (1)$$

4 Relating bounded curvature to covering number

In this subsection, we note that bounds on the dimension, volume, sectional curvature and injectivity radius suffice to ensure that the manifolds can be covered by relatively few $\epsilon$-balls. Let $V_p^{\mathcal{M}}(r)$ be the volume of the ball of radius $r$ in $\mathcal{M}$ centered at a point $p$. See ([9], page 51) for a proof of the following theorem.

Theorem 3 (Bishop-Gunther Inequality). Let $\mathcal{M}$ be a complete Riemannian manifold and assume that $r$ is not greater than the injectivity radius $\iota$. Let $K^{\mathcal{M}}$ denote the sectional curvature of $\mathcal{M}$, and let $\lambda > 0$ be a constant. Then $K^{\mathcal{M}} \le \lambda$ implies

$$V_p^{\mathcal{M}}(r) \ge \frac{2\pi^{n/2}}{\Gamma(\frac{n}{2})} \int_0^r \left( \frac{\sin(t\sqrt{\lambda})}{\sqrt{\lambda}} \right)^{n-1} dt.$$

Thus, if $\epsilon < \min\!\left(\iota, \frac{\pi}{2\sqrt{\lambda}}\right)$, then $V_p^{\mathcal{M}}(\epsilon) > \left(\frac{\epsilon}{C}\right)^{d}$.

Proof of Theorem 1. As a consequence of Theorem 3, we obtain an upper bound of $V\left(\frac{C}{\epsilon}\right)^d$ on the number of disjoint sets of the form $\mathcal{M} \cap B_{\epsilon/32}(p)$ that can be packed in $\mathcal{M}$. If $\{\mathcal{M} \cap B_{\epsilon/32}(p_1), \dots, \mathcal{M} \cap B_{\epsilon/32}(p_k)\}$ is a maximal family of disjoint sets of the form $\mathcal{M} \cap B_{\epsilon/32}(p)$, then there is no point $p \in \mathcal{M}$ such that $\min_i \|p - p_i\| > \epsilon/16$. Therefore, $\mathcal{M}$ is contained in the union of the balls $B_{\epsilon/16}(p_i)$. Therefore, we may apply Theorem 4 with

$$U\!\left(\frac{1}{\epsilon}\right) \le V \left(\frac{C}{\min(\epsilon, \iota, \lambda^{-1/2})}\right)^{d}.$$

The proof of Theorem 2 is along the lines of that of Theorem 1, so it has been deferred to the journal version.
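The covering-number bound above can be checked empirically on a simple manifold. The sketch below (ours) computes a greedy $\epsilon$-net of a densely sampled unit circle ($d = 1$, volume $2\pi$) and exhibits the expected $O(1/\epsilon)$ growth of the covering number:

```python
import numpy as np

def greedy_cover(points, eps):
    """Greedy eps-net: pick centers until every sample point is within
    eps of some center. With densely sampled points from a manifold,
    this approximates its covering number at scale eps."""
    centers = []
    remaining = points.copy()
    while len(remaining) > 0:
        c = remaining[0]
        centers.append(c)
        remaining = remaining[np.linalg.norm(remaining - c, axis=1) > eps]
    return np.array(centers)

# Unit circle in R^2 (d = 1, volume = circumference = 2*pi):
t = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
for eps in (0.5, 0.25, 0.125):
    print(eps, len(greedy_cover(circle, eps)))  # roughly doubles as eps halves
```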
5 Class of manifolds with a bounded covering number

In this section, we show that uniform bounds relating the empirical squared loss and the expected squared loss can be obtained for a class of manifolds whose covering number at a scale $\epsilon$ has a specified upper bound. Let $U : \mathbb{R}_+ \to \mathbb{Z}_+$ be any integer valued function. Let $\mathcal{G}$ be any family of subsets of $B$ such that for all $r > 0$, every element $\mathcal{M} \in \mathcal{G}$ can be covered using open Euclidean balls of radius $r$ centered around $U(\frac{1}{r})$ points; let this set of centers be $\Phi_{\mathcal{M}}(r)$. Note that if the subsets consist of $k$-tuples of points, $U(1/r)$ can be taken to be the constant function equal to $k$, and we recover the $k$-means question. A priori, it is unclear if

$$\sup_{\mathcal{M} \in \mathcal{G}} \left| \frac{\sum_{i=1}^{s} d(x_i, \mathcal{M})^2}{s} - \mathbb{E}_{\mathcal{P}}\, d(x, \mathcal{M})^2 \right| \qquad (2)$$

is a random variable, since the supremum of a set of random variables is not always a random variable (although if the set is countable this is true). However, (2) is equal to

$$\lim_{n \to \infty} \sup_{\mathcal{M} \in \mathcal{G}} \left| \frac{\sum_{i=1}^{s} d(x_i, \Phi_{\mathcal{M}}(1/n))^2}{s} - \mathbb{E}_{\mathcal{P}}\, d(x, \Phi_{\mathcal{M}}(1/n))^2 \right| \qquad (3)$$

and for each $n$, the supremum in the limits is over a set parameterized by $U(n)$ points, which without loss of generality we may take to be countable (due to the density and countability of rational points). Thus, for a fixed $n$, the quantity in the limits is a random variable. Since the limit as $n \to \infty$ of a sequence of bounded random variables is a random variable as well, (2) is a random variable too.

Theorem 4. Let $\epsilon$ and $\delta$ be error parameters. If

$$s \ge C \left( \frac{U(16/\epsilon)}{\epsilon^{2}} \min\!\left( U(16/\epsilon),\, \frac{1}{\epsilon^{2}}\log^{4} \frac{U(16/\epsilon)}{\epsilon} \right) + \frac{1}{\epsilon^{2}}\log\frac{1}{\delta} \right),$$

then

$$\mathbb{P}\left[ \sup_{\mathcal{M} \in \mathcal{G}} \left| \frac{\sum_{i=1}^{s} d(x_i, \mathcal{M})^2}{s} - \mathbb{E}_{\mathcal{P}}\, d(x, \mathcal{M})^2 \right| < \frac{\epsilon}{2} \right] > 1 - \delta. \qquad (4)$$

Proof. For every $g \in \mathcal{G}$, let $c(g, \epsilon) = \{c_1, \dots, c_k\}$ be a set of $k := U(16/\epsilon)$ points in $g \cap B$ such that $g$ is covered by the union of balls of radius $\epsilon/16$ centered at these points. Thus, for any point $x \in B$,

$$d^2(x, g) \le \left( \frac{\epsilon}{16} + d(x, c(g,\epsilon)) \right)^2 \qquad (5)$$
$$\le \frac{\epsilon^2}{256} + \frac{\epsilon \min_i \|x - c_i\|}{8} + d(x, c(g,\epsilon))^2. \qquad (6)$$

Since $\min_i \|x - c_i\|$ is less than or equal to 2, the last expression is less than $\frac{\epsilon}{2} + d(x, c(g,\epsilon))^2$. Our proof uses the "kernel trick" in conjunction with Theorem 5. Let $\Phi : (x_1, \dots, x_m)^T \mapsto 2^{-1/2}(x_1, \dots, x_m, 1)^T$ map a point $x \in \mathbb{R}^m$ to one in $\mathbb{R}^{m+1}$. For each $i$, let $c_i := (c_{i1}, \dots, c_{im})^T$ and $\tilde{c}_i := 2^{-1/2}\!\left(-c_{i1}, \dots, -c_{im}, \frac{\|c_i\|^2}{2}\right)^T$. The factor of $2^{-1/2}$ is necessitated by the fact that we wish the image of a point in the unit ball to also belong to the unit ball. Given a collection of points $c := \{c_1, \dots, c_k\}$ and a point $x \in B$, let $f_c(x) := d(x, c(g,\epsilon))^2$. Then

$$f_c(x) = \|x\|^2 + 4 \min\left(\Phi(x) \cdot \tilde{c}_1, \dots, \Phi(x) \cdot \tilde{c}_k\right).$$

For any set of $s$ samples $x_1, \dots, x_s$,

$$\sup_{f_c} \left| \frac{\sum_{i=1}^{s} f_c(x_i)}{s} - \mathbb{E}_{\mathcal{P}} f_c(x) \right| \le \left| \frac{\sum_{i=1}^{s} \|x_i\|^2}{s} - \mathbb{E}_{\mathcal{P}} \|x\|^2 \right| \qquad (7)$$
$$+\; 4 \sup_{f_c} \left| \frac{\sum_{i=1}^{s} \min_j \Phi(x_i) \cdot \tilde{c}_j}{s} - \mathbb{E}_{\mathcal{P}} \min_j \Phi(x) \cdot \tilde{c}_j \right|. \qquad (8)$$

By Hoeffding's inequality,

$$\mathbb{P}\left[ \left| \frac{\sum_{i=1}^{s} \|x_i\|^2}{s} - \mathbb{E}_{\mathcal{P}} \|x\|^2 \right| > \frac{\epsilon}{4} \right] < 2 e^{-\epsilon^2 s/8}, \qquad (9)$$

which is less than $\frac{\delta}{2}$. By Theorem 5,

$$\mathbb{P}\left[ \sup_{f_c} \left| \frac{\sum_{i=1}^{s} \min_j \Phi(x_i) \cdot \tilde{c}_j}{s} - \mathbb{E}_{\mathcal{P}} \min_j \Phi(x) \cdot \tilde{c}_j \right| > \frac{\epsilon}{16} \right] < \frac{\delta}{2}.$$

Therefore, $\mathbb{P}\left[ \sup_{f_c} \left| \frac{\sum_{i=1}^{s} f_c(x_i)}{s} - \mathbb{E}_{\mathcal{P}} f_c(x) \right| \le \frac{\epsilon}{2} \right] \ge 1 - \delta$.

[Figure 2: Random projections are likely to preserve linear separations.]

6 Bounding the Fat-Shattering dimension using random projections

The core of the uniform bounds in Theorems 1 and 2 is the following uniform bound on the minimum of $k$ linear functions on a ball in $\mathbb{R}^m$.

Theorem 5. Let $\mathcal{F}$ be the set of all functions $f$ from $B := \{x \in \mathbb{R}^m : \|x\| \le 1\}$ to $\mathbb{R}$ such that for some $k$ vectors $v_1, \dots, v_k \in B$,

$$f(x) := \min_i (v_i \cdot x).$$

Independent of $m$, if

$$s \ge C \left( \frac{k}{\epsilon^{2}} \min\!\left( \frac{1}{\epsilon^{2}}\log^{4} \frac{k}{\epsilon},\, k \right) + \frac{1}{\epsilon^{2}}\log\frac{1}{\delta} \right),$$

then

$$\mathbb{P}\left[ \sup_{F \in \mathcal{F}} \left| \frac{\sum_{i=1}^{s} F(x_i)}{s} - \mathbb{E}_{\mathcal{P}} F(x) \right| < \epsilon \right] > 1 - \delta. \qquad (10)$$

It has been open since 1997 [3] whether the known lower bound of $C\!\left(\frac{k}{\epsilon^2} + \frac{1}{\epsilon^2}\log\frac{1}{\delta}\right)$ on the sample complexity $s$ is tight. Theorem 5 in [14] uses Rademacher complexities to obtain an upper bound of

$$C \left( \frac{k^2}{\epsilon^2} + \frac{1}{\epsilon^2}\log\frac{1}{\delta} \right). \qquad (11)$$

(The scenarios in [3, 14] are that of $k$-means, but the argument in Theorem 4 reduces $k$-means to our setting.) Theorem 5 improves this to

$$C \left( \frac{k}{\epsilon^2} \min\!\left( \frac{1}{\epsilon^2}\log^4\frac{k}{\epsilon},\, k \right) + \frac{1}{\epsilon^2}\log\frac{1}{\delta} \right) \qquad (12)$$

by putting together (11) with a bound of

$$C \left( \frac{k}{\epsilon^4}\log^4\frac{k}{\epsilon} + \frac{1}{\epsilon^2}\log\frac{1}{\delta} \right) \qquad (13)$$

obtained using the Fat-Shattering dimension. Due to constraints on space, the details of the proof of Theorem 5 will appear in the journal version, but the essential ideas are summarized here.

Let $u := \mathrm{fat}_{\mathcal{F}}(\frac{\epsilon}{24})$ and let $x_1, \dots, x_u$ be a set of vectors that is $\epsilon$-shattered by $\mathcal{F}$. We would like to use VC theory to bound $u$, but doing so directly leads to a linear dependence on the ambient dimension $m$. In order to circumvent this difficulty, for $g := \frac{C \log(u+k)}{\epsilon^2}$, we consider a $g$-dimensional random linear subspace and the image under an appropriately scaled orthogonal projection $R$ of the points $x_1, \dots, x_u$ onto it. We show that the expected value of the $\frac{\epsilon}{2}$-shatter coefficient of $\{Rx_1, \dots, Rx_u\}$ is at least $2^{u-1}$, using the Johnson-Lindenstrauss Lemma [11] and the fact that $\{x_1, \dots, x_u\}$ is $\epsilon$-shattered. Using Vapnik-Chervonenkis theory and the Sauer-Shelah Lemma, we then show that the $\frac{\epsilon}{2}$-shatter coefficient cannot be more than $u^{k(g+2)}$. This implies that $2^{u-1} \le u^{k(g+2)}$, allowing us to conclude that $\mathrm{fat}_{\mathcal{F}}(\frac{\epsilon}{24}) \le \frac{Ck}{\epsilon^2}\log^2\frac{k}{\epsilon}$. By a well-known theorem of [1], a bound of $\frac{Ck}{\epsilon^2}\log^2\frac{k}{\epsilon}$ on $\mathrm{fat}_{\mathcal{F}}(\frac{\epsilon}{24})$ implies the bound in (13) on the sample complexity, which implies Theorem 5.
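The random-projection step at the heart of this proof can be illustrated numerically. In our sketch below, a Gaussian random matrix is a standard stand-in for the scaled orthogonal projection $R$: inner products, and hence values of the min-of-linear functions in $\mathcal{F}$, are preserved up to an additive distortion that shrinks as the projected dimension $g$ grows, as the Johnson-Lindenstrauss Lemma predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

m, g, n_pts = 1000, 40, 50          # ambient dim, projected dim, points
X = rng.normal(size=(n_pts, m))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere

# Gaussian JL projection with the usual 1/sqrt(g) scaling.
R = rng.normal(size=(g, m)) / np.sqrt(g)
Y = X @ R.T

# Pairwise inner products before and after projection.
orig = X @ X.T
proj = Y @ Y.T
print(np.max(np.abs(orig - proj)))  # modest distortion, shrinking as g grows
```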
7 Minimax lower bounds on the sample complexity

Let $K$ be a universal constant whose value will be fixed throughout this section. In this section, we will state lower bounds on the number of samples needed by the minimax decision rule for learning, from high dimensional data and with high probability, a manifold with a squared loss that is within $\epsilon$ of the optimal. We will construct a carefully chosen prior on the space of probability distributions and use an argument that can either be viewed as an application of the probabilistic method, or of the fact that the minimax risk is at least the risk of a Bayes optimal manifold computed with respect to this prior.

Let $U$ be a $K^{2d}k$-dimensional vector space containing the origin, spanned by the basis $\{e_1, \dots, e_{K^{2d}k}\}$, and let $S$ be the surface of the ball of radius 1 in $\mathbb{R}^m$. We assume that $m$ is greater than or equal to $K^{2d}k + d$. Let $W$ be the $d$-dimensional vector space spanned by $\{e_{K^{2d}k+1}, \dots, e_{K^{2d}k+d}\}$. Let $S_1, \dots, S_{K^{2d}k}$ denote spheres such that for each $i$, $S_i := S \cap (\sqrt{1 - \tau^2}\, e_i + W)$, where $x + W$ is the translation of $W$ by $x$. Note that each $S_i$ has radius $\tau$. Let $\ell = \binom{K^{2d}k}{K^{d}k}$ and let $\{\mathcal{M}_1, \dots, \mathcal{M}_\ell\}$ consist of all $K^{d}k$-element subsets of $\{S_1, \dots, S_{K^{2d}k}\}$. Let $\omega_d$ be the volume of the unit ball in $\mathbb{R}^d$. The following theorem shows that no algorithm can produce a nearly optimal manifold with high probability unless it uses a number of samples that depends linearly on volume, exponentially on intrinsic dimension and polynomially on the curvature.

Theorem 6. Let $\mathcal{F}$ be equal to either $\mathcal{G}_e(d, V, \tau)$ or $\mathcal{G}_i(d, V, \frac{1}{\tau^2}, \pi\tau)$. Let $k = \left\lfloor \frac{V}{5^d \omega_d (K^4 \tau)^d} \right\rfloor$. Let $\mathcal{A}$ be an arbitrary algorithm that takes as input a set of data points $x = \{x_1, \dots, x_k\}$ and outputs a manifold $\mathcal{M}_{\mathcal{A}}(x)$ in $\mathcal{F}$. If $\epsilon + 2\delta < \frac{1}{3}\left(\frac{1}{2\sqrt{2}} - \tau\right)^2$, then

$$\inf_{\mathcal{P}} \mathbb{P}\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), \mathcal{P}) - \inf_{\mathcal{M} \in \mathcal{F}} \mathcal{L}(\mathcal{M}, \mathcal{P}) < \epsilon \right] < 1 - \delta,$$

where $\mathcal{P}$ ranges over all distributions supported on $B$ and $x_1, \dots, x_k$ are i.i.d draws from $\mathcal{P}$.

Proof. Observe from Lemma ?? and Theorem 3 that $\mathcal{F}$ is a class of manifolds such that each manifold in $\mathcal{F}$ is contained in the union of $K^{\frac{3d}{2}}k$ $m$-dimensional balls of radius $\tau$, and $\{\mathcal{M}_1, \dots, \mathcal{M}_\ell\} \subseteq \mathcal{F}$. (The reason why we have $K^{\frac{3d}{2}}$ rather than $K^{\frac{5d}{4}}$ as in the statement of the theorem is that the parameters of $\mathcal{G}_i(d, V, \tau)$ are intrinsic, and to transfer to the extrinsic setting of the last sentence, one needs some leeway.) Let $P_1, \dots, P_\ell$ be probability distributions that are uniform on $\{\mathcal{M}_1, \dots, \mathcal{M}_\ell\}$ with respect to the induced Riemannian measure. Suppose $\mathcal{A}$ is an algorithm that takes as input a set of data points $x = \{x_1, \dots, x_t\}$ and outputs a manifold $\mathcal{M}_{\mathcal{A}}(x)$. Let $r$ be chosen uniformly at random from $\{1, \dots, \ell\}$. Then

$$\inf_{\mathcal{P}} \mathbb{P}\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), \mathcal{P}) - \inf_{\mathcal{M} \in \mathcal{F}} \mathcal{L}(\mathcal{M}, \mathcal{P}) < \epsilon \right] \le \mathbb{E}_{P_r}\, \mathbb{P}_x\!\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), P_r) - \inf_{\mathcal{M} \in \mathcal{F}} \mathcal{L}(\mathcal{M}, P_r) < \epsilon \right]$$
$$= \mathbb{E}_x\, \mathbb{P}_{P_r}\!\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), P_r) - \inf_{\mathcal{M} \in \mathcal{F}} \mathcal{L}(\mathcal{M}, P_r) < \epsilon \,\Big|\, x \right] = \mathbb{E}_x\, \mathbb{P}_{P_r}\!\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), P_r) < \epsilon \,\Big|\, x \right].$$

We first prove a lower bound on $\inf_x \mathbb{E}_r[\mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), P_r) \mid x]$. We see that

$$\mathbb{E}_r\!\left[ \mathcal{L}(\mathcal{M}_{\mathcal{A}}(x), P_r) \mid x \right] = \mathbb{E}_{r, x_{k+1}}\!\left[ d(\mathcal{M}_{\mathcal{A}}(x), x_{k+1})^2 \mid x \right]. \qquad (14)$$

Conditioned on $x$, the probability of the event (say $E_{dif}$) that $x_{k+1}$ does not belong to the same sphere as one of $x_1, \dots, x_k$ is at least $\frac{1}{2}$. Conditioned on $E_{dif}$ and $x_1, \dots, x_k$, the probability that $x_{k+1}$ lies on a given sphere $S_j$ is equal to 0 if one of $x_1, \dots, x_k$ lies on $S_j$, and $\frac{1}{K^{2d}k - k'}$ otherwise, where $k' \le k$ is the number of spheres in $\{S_i\}$ which contain at least one point among $x_1, \dots, x_k$. By construction, $\mathcal{A}(x_1, \dots, x_k)$ can be covered by $K^{\frac{3d}{2}}k$ balls of radius $\tau$; let their centers be $y_1, \dots, y_{K^{3d/2}k}$.
However, it is easy to check that for any dimension m, the cardinality of the set S_y of all S_i that have a nonempty intersection with the balls of radius 2τ centered around y_1, …, y_{K^{3d/2}k} is at most K^{3d/2}k. Therefore, P[ d(M_A(x), x_{k+1}) ≥ 2τ − τ | x ] is at least

P[E_dif] · P[ x_{k+1} ∉ S_y | E_dif ] ≥ (1/2) · (K^{2d}k − k′ − K^{3d/2}k) / (K^{2d}k − k′) ≥ 1/3.

Therefore, E_{r, x_{k+1}}[ d(M_A(x), x_{k+1})² | x ] ≥ (1/3)(2τ − τ)². Finally, we observe that it is not possible for E_x P_{P_r}[ L(M_A(x), P_r) < ε | x ] to be more than 1 − δ if inf_x E_{P_r}[ L(M_A(x), P_r) | x ] > ε + 2δ, because L(M_A(x), P_r) is bounded above by 2.

8 Algorithmic implications

8.1 k-means

Applying Theorem 4 to the case when P is a distribution supported equally on n specific points (that are part of an input) in a unit ball of R^m, we see that in order to obtain an additive ε approximation for the k-means problem with probability 1 − δ, it suffices to sample

s ≥ C min( (k/ε²) log⁴(k/ε), (1/ε²)(k² + log(1/δ)) )

points uniformly at random (which would have a cost of O(s log n) if the cost of one random bit is O(1)) and exhaustively solve k-means on the resulting subset. Supposing that a dot product between two vectors x_i, x_j can be computed using m̃ operations, the total cost of sampling and then exhaustively solving k-means on the sample is O(m̃ s^k + s log n). In contrast, if one asks for a multiplicative (1 + ε) approximation, the best running time known depends linearly on n [13]. If P is an unknown probability distribution, the above algorithm improves upon the best results in a natural statistical framework for clustering [5].
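As an illustration of the sampling strategy of Section 8.1, here is a minimal sketch: it subsamples s points uniformly and then runs k-means on the subsample. For the inner solve it substitutes Lloyd's algorithm with restarts for the exhaustive search described above, so it is a practical stand-in rather than the procedure whose guarantee is proved; the parameter values and names are illustrative assumptions.

```python
import numpy as np

def kmeans_on_subsample(points, k, s, n_restarts=10, n_iters=100, seed=0):
    """Uniformly subsample s points, then run Lloyd's algorithm on them."""
    rng = np.random.default_rng(seed)
    sample = points[rng.choice(len(points), size=min(s, len(points)), replace=False)]
    best_cost, best_centers = np.inf, None
    for _ in range(n_restarts):
        centers = sample[rng.choice(len(sample), size=k, replace=False)]
        for _ in range(n_iters):
            # Assign each sampled point to its nearest center.
            d2 = ((sample[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(1)
            # Recompute centers as cluster means (keep the old center if empty).
            new = np.array([sample[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        cost = ((sample[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1).mean()
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_centers

# Example: 2000 points in the unit ball of R^10, k = 4, subsample of s = 200.
rng = np.random.default_rng(1)
pts = rng.normal(size=(2000, 10))
pts /= np.maximum(1.0, np.linalg.norm(pts, axis=1, keepdims=True))
centers = kmeans_on_subsample(pts, k=4, s=200)
```

The point of the theorem is that s can be chosen independently of n and of the ambient dimension, so the expensive solve runs only on the small subsample.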
8.2 Fitting piecewise linear curves

In this subsection, we illustrate the algorithmic utility of the uniform bound in Theorem 4 by obtaining an algorithm for fitting a curve of length no more than L to data drawn from an unknown probability distribution P supported in B, whose sample complexity is independent of the ambient dimension. This curve, with probability 1 − δ, achieves a mean squared error of less than ε more than the optimum. The proof of its correctness and the analysis of its run-time have been deferred to the journal version. The algorithm is as follows:

1. Let k := ⌈L/ε⌉ and s ≥ C min( (k/ε²) log⁴(k/ε), (1/ε²)(k² + log(1/δ)) ). Sample points x_1, …, x_s i.i.d. from P and set J := span({x_i}_{i=1}^s).
2. For every permutation σ of [s], minimize the convex objective function Σ_{i=1}^s d(x_{σ(i)}, y_i)² over the convex set of all s-tuples of points (y_1, …, y_s) in J such that Σ_{i=1}^{s−1} ‖y_{i+1} − y_i‖ ≤ L.
3. If the minimum over all (y_1, …, y_s) (and σ) is achieved at (z_1, …, z_s), output the curve obtained by joining z_i to z_{i+1} for each i by a straight line segment.

9 Acknowledgements

We are grateful to Stephen Boyd for several helpful conversations.

References

[1] Noga Alon, Shai Ben-David, Nicolò Cesa-Bianchi, and David Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. J. ACM, 44(4):615-631, 1997.
[2] Rosa I. Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projection. In FOCS, pages 616-623, 1999.
[3] Peter Bartlett. The minimax distortion redundancy in empirical quantizer design. IEEE Transactions on Information Theory, 44:1802-1813, 1997.
[4] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[5] Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for k-median and k-means clustering. Mach. Learn., 66(2-3):243-257, 2007.
[6] Gunnar Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46:255-308, January 2009.
[7] Sanjoy Dasgupta. Learning mixtures of Gaussians. In FOCS, pages 634-644, 1999.
[8] David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100(10):5591-5596, May 2003.
[9] A. Gray. Tubes. Addison-Wesley, 1990.
[10] Trevor J. Hastie and Werner Stuetzle. Principal curves. Journal of the American Statistical Association, 84:502-516, 1989.
[11] William Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:419-441, 1984.
[12] Balázs Kégl, Adam Krzyzak, Tamás Linder, and Kenneth Zeger. Learning and design of principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22:281-297, 2000.
[13] Amit Kumar, Yogish Sabharwal, and Sandeep Sen. A simple linear time (1+ε)-approximation algorithm for k-means clustering in any dimensions. In FOCS, pages 454-462, 2004.
[14] Andreas Maurer and Massimiliano Pontil. Generalization bounds for k-dimensional coding schemes in Hilbert spaces. In ALT, pages 79-91, 2008.
[15] H. Narayanan and P. Niyogi. On the sample complexity of learning smooth cuts on a manifold. In Proc. of the 22nd Annual Conference on Learning Theory (COLT), June 2009.
[16] Partha Niyogi, Stephen Smale, and Shmuel Weinberger. Finding the homology of submanifolds with high confidence from random samples. Discrete & Computational Geometry, 39(1-3):419-441, 2008.
[17] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[18] Alexander J. Smola, Sebastian Mika, Bernhard Schölkopf, and Robert C. Williamson. Regularized principal manifolds. J. Mach. Learn. Res., 1:179-209, 2001.
[19] J. B. Tenenbaum, V. Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[20] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. Discrete & Computational Geometry, 33(2):249-274, 2005.
Deterministic Single-Pass Algorithm for LDA

Issei Sato
University of Tokyo, Japan
[email protected]

Kenichi Kurihara
Google
[email protected]

Hiroshi Nakagawa
University of Tokyo, Japan
[email protected]

Abstract

We develop a deterministic single-pass algorithm for latent Dirichlet allocation (LDA) in order to process received documents one at a time and then discard them in an excess text stream. Our algorithm does not need to store old statistics for all data. The proposed algorithm is much faster than a batch algorithm and is comparable to the batch algorithm in terms of perplexity in experiments.

1 Introduction

Huge quantities of text data such as news articles and blog posts arrive in a continuous stream. Online learning has attracted a great deal of attention as a useful method for handling this growing quantity of streaming data because it processes data one at a time, whereas batch algorithms are not feasible in these settings because they need all the data at the same time. This paper focuses on online learning for latent Dirichlet allocation (LDA) (Blei et al., 2003), which is a widely used probabilistic model for text data.

Online learning for LDA has already been developed (Banerjee and Basu, 2007; Alsumait et al., 2008; Canini et al., 2009; Yao et al., 2009). Existing studies were based on sampling methods such as the incremental Gibbs sampler and the particle filter. Sampling methods seem to be inappropriate for streaming data because they have to represent a posterior by using a lot of samples, which basically needs much time. Moreover, sampling algorithms often need a resampling step in which a sampling method is applied to old data. Storing old data or old samples adversely affects the good properties of online algorithms. Particle filters also need to run m parallel processes. A parallel algorithm needs more memory than a single-process algorithm, which is not useful for a large quantity of data, especially in the case of a large vocabulary. For example, LDA needs to store the number of words observed in each topic; if the number of topics is T and the vocabulary size is V, a particle filter with m particles requires O(m × T × V) memory.

[Figure 1: Overview of the relationships among inferences, arranged by running time (short to long) and memory usage (small to large): sREM-LDA, iREM-LDA, CVB-LDA, VB-LDA.]

We propose two deterministic online algorithms: an incremental algorithm and a single-pass algorithm. Our incremental algorithm is an incremental variant of the reverse EM (REM) algorithm (Minka, 2001). The incremental algorithm updates parameters by replacing old sufficient statistics with new ones for each datum. Our single-pass algorithm is based on the incremental algorithm, but it does not need to store old statistics for all data. In our single-pass algorithm, we propose a sequential update method for the Dirichlet parameters. Asuncion et al. (2009) and Wallach et al. (2009) indicated the importance of estimating the parameters of the Dirichlet distribution, which is the distribution over the topic distributions of documents. Moreover, we can deal with a growing vocabulary size: in real life, the total vocabulary size is unknown, i.e., it increases as documents are observed.

In summary, Fig. 1 shows the relationships among inferences. VB-LDA is the variational inference for LDA, which is a batch inference; CVB-LDA is the collapsed variational inference for LDA (Teh et al., 2007); iREM-LDA is our incremental algorithm; and sREM-LDA is our single-pass algorithm for LDA.
Section 2 briefly explains inference algorithms for LDA. Section 3 describes the proposed algorithm for online learning. Section 4 presents the experimental results.

2 Overview of Latent Dirichlet Allocation

This section gives an overview of LDA, in which documents are represented as random mixtures over latent topics and each topic is characterized by a distribution over words. First we define the notation, and then we describe the formulation of LDA. T is the number of topics. M is the number of documents. V is the vocabulary size. N_j is the number of words in document j. w_{j,i} denotes the i-th word in document j. z_{j,i} denotes the latent topic of word w_{j,i}. Multi(·) is a multinomial distribution. Dir(·) is a Dirichlet distribution. θ_j denotes a T-dimensional probability vector that is the parameter of the multinomial distribution and represents the topic distribution of document j. φ_t is a multinomial parameter, a V-dimensional probability vector, where φ_{t,v} specifies the probability of generating word v given topic t. α is the T-dimensional parameter vector of the Dirichlet distribution over θ_j (j = 1, …, M).

LDA assumes the following generative process. For each of the T topics t, draw φ_t ~ Dir(φ|β), where Dir(φ|β) ∝ Π_v φ_{t,v}^{β−1}. For each of the M documents j, draw θ_j ~ Dir(θ|α), where Dir(θ|α) ∝ Π_t θ_t^{α_t−1}. For each of the N_j words w_{j,i} in document j, draw topic z_{j,i} ~ Multi(z|θ_j) and draw word w_{j,i} ~ p(w|z_{j,i}, φ), where p(w = v|z = t, φ) = φ_{t,v}. That is to say, the complete-data likelihood of a document w_j is given by

p(w_j, z_j, θ_j | α, φ) = p(θ_j|α) Π_{i=1}^{N_j} p(w_{j,i}|z_{j,i}, φ) p(z_j|θ_j).   (1)

2.1 Variational Bayes Inference for LDA

The VB inference for LDA (Blei et al., 2003) introduces a factorized variational posterior q(z, θ, φ) over z = {z_{j,i}}, θ = {θ_j} and φ = {φ_t}, given by

q(z, θ, φ) = Π_{j,i} q(z_{j,i}|λ_{j,i}) Π_j q(θ_j|γ_j) Π_t q(φ_t|β_t),   (2)

where λ and γ are variational parameters, λ_{j,i,t} specifies the probability that the topic of word w_{j,i} is topic t, and γ_j and β_t are the parameters of the Dirichlet distributions over θ_j and φ_t, respectively, i.e., q(θ_j|γ_j) ∝ Π_t θ_{j,t}^{γ_{j,t}−1} and q(φ_t|β_t) ∝ Π_v φ_{t,v}^{β_{t,v}−1}.

The log-likelihood of the documents is lower bounded, by introducing q(z, θ, φ), as

F[q(z, θ, φ)] = Σ_z ∫∫ q(z, θ, φ) log [ Π_j p(w_j, z_j, θ_j|α, φ) Π_t p(φ_t|β) / q(z, θ, φ) ] dθ dφ.   (3)

The parameters are updated as

λ_{j,i,t} ∝ [ exp Ψ(β_{t,w_{j,i}}) / exp Ψ(Σ_v β_{t,v}) ] exp Ψ(γ_{j,t}),   γ_{j,t} = α_t + Σ_{i=1}^{N_j} λ_{j,i,t},   β_{t,v} = β + Σ_j n_{j,t,v},   (4)

where Ψ(·) is the digamma function, n_{j,t,v} = Σ_i λ_{j,i,t} I(w_{j,i} = v), and I(·) is an indicator function.

We can estimate α with the fixed point iteration (Minka, 2000; Asuncion et al., 2009), introducing the gamma prior G(α_t|a_0, b_0), i.e., α_t ~ G(α_t|a_0, b_0) (t = 1, …, T), as

α_t^{new} = ( a_0 − 1 + Σ_j {Ψ(α_t^{old} + n_{j,t}) − Ψ(α_t^{old})} α_t^{old} ) / ( b_0 + Σ_j (Ψ(N_j + α_0^{old}) − Ψ(α_0^{old})) ),   (5)

where α_0 = Σ_t α_t, n_{j,t} = Σ_i λ_{j,i,t}, and a_0 and b_0 are the parameters of the gamma distribution. Algorithm 1 gives the VB inference scheme for LDA.

Algorithm 1 VB inference for LDA
1: for iteration it = 1, …, L do
2:   for j = 1, …, M do
3:     for i = 1, …, N_j do
4:       Update λ_{j,i,t} (t = 1, …, T) by Eq. (4)
5:     end for
6:     Update γ_{j,t} (t = 1, …, T) by Eq. (4)
7:   end for
8:   Update β by Eq. (4)
9:   Update α by Eq. (5)
10: end for
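The following sketch implements one sweep of the VB updates of Eq. (4) for a single document (the β and α updates are analogous batch steps over all documents). It is a minimal illustration with assumed variable names, not the authors' code.

```python
import numpy as np
from scipy.special import digamma

def vb_update_document(words, lam, gamma_j, beta_tv, alpha):
    """One VB sweep (Eq. 4) over one document.

    words:   length-N array of word ids for document j
    lam:     N x T responsibilities lambda_{j,i,t}
    gamma_j: length-T Dirichlet parameter over theta_j
    beta_tv: T x V variational Dirichlet parameters over phi
    alpha:   length-T Dirichlet hyperparameter
    """
    # exp(Psi(beta_{t,w})) / exp(Psi(sum_v beta_{t,v})), in log space.
    log_phi = digamma(beta_tv) - digamma(beta_tv.sum(1, keepdims=True))
    for i, v in enumerate(words):
        log_l = log_phi[:, v] + digamma(gamma_j)    # unnormalized log lambda
        l = np.exp(log_l - log_l.max())
        lam[i] = l / l.sum()
    gamma_j = alpha + lam.sum(0)                    # Eq. (4), gamma update
    return lam, gamma_j
```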
2.2 Collapsed Variational Bayes Inference for LDA

Teh et al. (2007) proposed CVB-LDA, inspired by collapsed Gibbs sampling, and found that the convergence of CVB-LDA is experimentally faster than that of VB-LDA and that CVB-LDA outperformed VB-LDA in terms of perplexity. CVB-LDA introduces only a variational posterior q(z), with θ and φ marginalized out over their priors. The CVB inference optimizes the following lower bound:

F_CVB[q(z)] = Σ_{j=1}^M Σ_z q(z) log [ p(w_j, z_j|α, β) / q(z) ].   (6)

The derivation of the update equation for q(z) is slightly complicated and involves approximations to compute intractable summations. Although Teh et al. (2007) made use of a second-order Taylor expansion as an approximation, Asuncion et al. (2009) showed the usefulness of an approximation using only zero-order information. An update using only zero-order information is given by

λ_{j,i,t} ∝ [ (β + n_{t,w_{j,i}}^{−j,i}) / (Vβ + Σ_v n_{t,v}^{−j,i}) ] (α_t + n_{j,t}^{−j,i}),   n_{t,v} = Σ_{j,i} λ_{j,i,t} I(w_{j,i} = v),   n_{j,t} = Σ_{i=1}^{N_j} λ_{j,i,t},   (7)

where the superscript −j,i denotes subtracting λ_{j,i,t}. Algorithm 2 provides the CVB inference scheme for LDA.

Algorithm 2 CVB inference for LDA
1: for iteration it = 1, …, L do
2:   for j = 1, …, M do
3:     for i = 1, …, N_j do
4:       Update λ_{j,i,t} by Eq. (7)
5:       Update n_{j,t} and n_{t,w_{j,i}}, replacing λ_{j,i,t}^{old} with λ_{j,i,t}^{new}
6:     end for
7:   end for
8:   Update α by Eq. (5)
9: end for

3 Deterministic Online Algorithm for LDA

The purpose of this study is to process text data such as news articles and blog posts arriving in a continuous stream by using LDA. We propose a learning algorithm for LDA that can be applied to these semi-infinite and time-series text streams. In these situations, we want to process each text one at a time and then discard it. We repeat iterations only for each word within a document; that is, we update parameters from an arriving document and discard the document after doing l iterations. Therefore, we do not need to store statistics about discarded documents. First we derive an incremental algorithm for LDA, and then we extend the incremental algorithm to a single-pass algorithm.

3.1 Incremental Learning

Neal and Hinton (1998) provided a framework of incremental learning for the EM algorithm. In general unsupervised learning, we estimate sufficient statistics s_i for each datum i, compute the whole sufficient statistics σ (= Σ_i s_i) from all data, and update parameters by using σ. In incremental learning, for each datum i, we estimate s_i, compute σ^{(i)} from s_i, and update parameters from σ^{(i)}. It is easy to extend an existing batch algorithm to incremental learning if the whole sufficient statistics or the parameter updates are constructed by simply summing per-datum statistics. The incremental algorithm processes datum i by subtracting the old s_i^{old} and adding the new s_i^{new}, i.e., σ^{(i)} = σ − s_i^{old} + s_i^{new}. The incremental algorithm needs to store the old statistics {s_i^{old}} for all data. While batch algorithms update parameters after sweeping through all data, the incremental algorithm updates parameters for each datum one at a time, which results in more parameter updates than batch algorithms. Therefore, the incremental algorithm sometimes converges faster than batch algorithms.
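As a minimal illustration of this bookkeeping (with assumed names), the incremental step replaces one datum's old statistics inside the running total:

```python
import numpy as np

def incremental_step(sigma, s_old, s_new):
    """Incremental EM bookkeeping: sigma^{(i)} = sigma - s_i^old + s_i^new."""
    return sigma - s_old + s_new

# Toy usage: per-datum statistics are vectors; in incremental learning the
# model parameters are re-estimated from sigma after every call, rather
# than once per full sweep as in the batch algorithm.
stats = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
sigma = sum(stats)
new_stat_0 = np.array([0.8, 0.2])
sigma = incremental_step(sigma, stats[0], new_stat_0)
stats[0] = new_stat_0
```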
Below, let us consider the incremental algorithm for LDA. We start by optimizing the lower-bound different form VB-LDA by using the reverse EM (REM) algorithm (Minka, 2001) as follows: ? ? ? ? Nj T Nj T V ?? ? I(wj,i =v) p(wj |?, ?) = (?j,t ?t,v ) p(?j |?)d?j = (?j,t ?t,wj,i )p(?j |?)d?j , i=1 t=1 v=1 i=1 t=1 (8) ? ? ? Nj T ( ? ?j,t ?t,wj,i )?j,i,t i=1 t=1 = ?j,i,t Nj T ( T ? ? ?t,wj,i )?j,i,t ? ? i=1 t=1 ?j,i,t p(?j |?)d?j , ? ?j,ti ?j,i,t (9) p(?j |?)d?j . (10) t=1 ? ? (x) ? Equation (9) is derived from Jensen?s inequality as follows. log x f (x) = log x q(x) fq(x) ? ? ? ? ? f (x) f (x) q(x) f (x) q(x) where x q(x) = 1, and so x f (x) ? x ( q(x) ) . x q(x) log q(x) = log x ( q(x) ) Therefore, the lower bound for the log-likelihood is given by ( ) ? ? ? ?(?t + ? ?j,i,t ) ?t,wj,i ? ?( t ?t ) i ? ? ?j,i,t log F[q(z)] = + log . ?j,i,t ?(Nj + t ?t ) t ?(?t ) j,i,t j ? The maximum of F[q(z)] with respect to q(zj,i = t) = ?j,i,t and ? is given by ? ? ?j,i,t ? ?t,wj,i exp{?(?t + ?j,i,t )}, ?tv ? ? + nj,t,v , i (11) (12) j The updates of ? are the same as Eq.(5). Note that? we use the maximum a posteriori estiamtion for ?, however, we do not use ? ? 1 to avoid ? ? 1 + j nj,t,v taking a negative value. ? The lower bound F[q(z)] introduces only q(z) like CVB-LDA. Equation (12) incrementally updates the topic distribution of a document for each word as in CVB-LDA because we do not need ?j,i in Eq.(12) due to marginalizing out of ? j . Equation (12) is a fixed point update, whereas CVB-LDA can be interpreted as a coordinate ascent algorithm. ? and ? are updated from the entire document. That is, when we compare this algorithm with VB-LDA, it looks like a hybrid variant of a batch updates for ? and ?, and incremental updates for ? j , Here, we consider an incremental update for ? to be analogous to CVBLDA, in which ? is updated for each word. Note that in the LDA setup, each independent identically distributed data point is a document not a word. Therefore, we incrementally estimate ? for each document by swapping ?N statistics nj,t,v = i j ?j,i,t I(wj,i = v) which is the number of word v generated from topic t in document j. Algorithm 3 shows our incremental algorithm for LDA. This algorithm incrementally optimizes the lower bound in Eq.(11). 4 Algorithm 3 Incremental algorithm for LDA Algorithm 4 Single-pass algorithm for LDA 1: for iteration it = 1, ? ? ? , L do 2: for j = 1, ? ? ? , M do 3: for i = 1, ? ? ? , Nj do 4: Update ?j,i,t by Eq. (12) 5: end for new 6: Replace nold j,t,v with nj,t,v for v ? 1: for j = 1, ? ? ? , M do 2: for iteration it = 1, ? ? ? , l do 3: for i = 1, ..., Nj do 4: Update ?j,i,t by Eq. (13). 5: end for 6: Update ? (j) by Eq.(13). 7: Update ?(j) by Eq.(17). 8: end for 9: Update ?(j) by Eq.(14). 10: Update a ?(j) and ?b(j) by Eq.(17). 11: end for N j {wj,i }i=1 in ? of Eq. (12) . 7: end for 8: Update ? by Eq. (5) 9: end for 3.3 Single-Pass Algorithm for LDA Our single-pass algorithm for LDA was inspired by the Bayesian formulation, which internally includes a sequential update. The posterior distribution with the contribution from the data point N ?1 xN is separated out so that p(?|{xi }N i=1 ) ? p(xN |?)p(?|{xi }i=1 ), where ? denotes a parameter. This indicates that we can use a posterior given an observed datum as a prior for the next datum.. We use parameters learned from observed data as prior parameters for the next data. For example, ?M ?1 nj,t,v } + nM,t,v . Here, we can interpret ?t,v in Eq. (12) is represented as ?t,v ? {? 
+ j ?M ?1 (M ?1) nj,t,v } as prior parameter ?t,v for the M -th document. {? + j Our single-pass algorithm sequentially sets a prior for each arrived document. By using this sequential setting of prior parameters, we present a single-pass algorithm for LDA as shown in Algorithm (j?1) 4. First, we update parameters from j-th arrived document given prior parameters {?t,v } for l iterations (j) (j) ?j,i,t ??t,wj,i exp{?(?t + ? (j) (j?1) ?j,i,t )}, ?t,v ? ?t,v + i (0) Nj ? ?j,i,t I(wj,i = v), (13) i (j) where ?t,v = ? and ?t is explained below. Then, we set prior parameters by using statistics from the document for the next document as follows, and finally discard the document. (j) (j?1) ?t,v =?t,v + Nj ? ?j,i,t I(wj,i = v). (14) i Since the updates are repeated within a document, we need to store statistics {?j,i,t } for each word in a document, but not for all words in all documents. In the CVB and iREM algorithms, the Dirichlet parameter, ?, uses batch updates, i.e., ? is updated by using the entire document once in one iteration. We need an online-update algorithm for ? to process a streaming text. However, unlike parameter ?t,v , the update of ? in Eq.(5) is not constructed by simply summarizing sufficient statistics of data and a prior. Therefore, we derive a single-pass update for the Dirichlet parameter ? using the following interpretation. We consider Eq.(5) to be the expectation of ?t over posterior G(?t |? at , ?b) given documents D and a ? ? 1 t prior G(?t |a0 , b0 ), i.e, ?tnew = E[?t ]G(?|?at ,?b) = , where ?b a ?t =a0 + M ? j aj,t , ?b = b0 + M ? bj , (15) j aj,t = {?(?told + nj,t ) ? ?(?told )}?told , bj = ?(Nj + ?0old ) ? ?(?0old ). 5 (16) We regard aj,t and bj as statistics for each document, which indicates that the parameters that we actually update are a ?t and ?b in Eq.(5). These updates are simple summarizations of aj,t and bj and (j) prior parameters a0 and b0 . Therefore, we have an update for ?t after observing document j given by (j) (j) ?t aj,t a ?t ? 1 (j) (j?1) , a ?t = a ?t + aj,t , ?b(j) = ?b(j?1) + bj , ?b(j) t (j?1) (j?1) (j?1) (j?1) (j?1) = {?(?t + nj,t ) ? ?(?t )}?t , bj = ?(Nj + ?0 ) ? ?(?0 ), = E[?t ]G(?|?a(j) ,?b(j) ) = (0) where a ?t (j?1) a ?t 3.4 (17) (18) = a0 and ?b(0) = b0 . and ?b(j?1) are used as prior paramters for the next j-th documents. Analysis This section analyze the proposed updates for parameters ? and ? in the previous section. We eventually update parameters ?(j) and ? (j) given document j as ?j?1 a0 ? 1 + d ad,t + aj,t bj aj,t ? (j) (j?1) = ?t (1 ? ?j? ) + ?j? , ?j = ?t = ?j?1 ?j . b j b0 + d bd + bj b0 + d bd (19) ?j?1 nd,t,v + nj,t,v nj,t,v ? (Vj ? Vj?1 )? + nj,t,? (j?1) d = ?t,v (1 ? ?j? ) + ?j? , ?j = . ? ?j j?1 n j,t,? Vj ? + d nd,t,? + nj,t,? Vj ? + d nd,t,? (20) ? where nt,? = v nt,v and Vj is the vocabulary size of total observed documents(d = 1, ? ? ? , j). Our single-pass algorithm sequentially sets a prior for each arrived document, and so we can select a prior (a dimension of Dirichlet distribution) corresponding to observed vocabulary. In fact, this property is useful for our problem because the vocabulary size is growing in the text stream. These updates indicate that ?j? and ?j? interpolate the parameters estimated from old and new data. These updates look like a stepwise algorithm (H.Robbins and S.Monro, 1951; Sato and Ishii, 2000), although a stepsize algorithm interpolates sufficient statistics whereas our updates interpolate parameters. 
In our updates, how we set the stepsize for parameter updates is equivalent to how we set the hyperparameters for priors. Therefore, we do not need to newly introduce a stepsize parameter. (j) ?t,v = ?+ In our update of ?, the appearance rate of word v in topic t in document j, nj,t,v /nj,t,? , is added (j?1) to old parameter ?t,v with weight ?j? , which gradually decreases as the document is observed. The same relation holds for ?. Therefore, the influence of new data decreases as the number of document observations increases as shown in Theorem 1. Moreover, Theorem 1 is an important role in analyzing the convergence of parameter updates by using the super-martingale convergence theorem (Bertsekas and Tsitsiklis, 1996; Brochu et al., 2004). This convergence analysis is our future work. Theorem 1. If ? and ? exist satisfying 0 < ? < Sj < ? for any j, ?j = ?+ Sj ?j d (21) Sd satisfies lim ?j = 0, j?? ? ? ?j = ?, j ? ? ?j2 < ? (22) j Note that ?j? and ?j? are shown as ?j given by Eq. (21). The proof is given in the supporting material. 6 4 Experiments We carried out experiments on document modeling in terms of perplexity. We compared the inferences for LDA in two sets of text data. The first was ?Associated Press(AP)? where the number of documents was M = 10, 000 and the vocabulary size was V = 67, 291. The second was ?The Wall Street Journal(WSJ)? where M = 10, 000 and V = 56, 738. The ordering of document is timeseries. The comparison metric for document modeling was the ?test set perplexity?. We randomly split both data sets into a training set and a test set by assigninig 20% of the words in each document to the test set. Stop words were eliminated in datasets. We performed experiments on six inferences, PF, VB, CVB0, CVB, iREM and sREM. PF denotes the particle filter for LDA used in Canini et al. (2009). We set ?t as 50/T in PF. The number of particles, denoted by P , is 64. The number of words for resampling, denoted by R, is 20. The effective sample size (ESS) threshold, which controls the number of resamplings, is set at 10. CVB0 and CVB are collapsed variational inference for LDA using zero-order and second-order information, respectively. iREM represents the incremental reverse EM algorithm in Algorithm 3. CVB0 and CVB estimates the Dirichlet parameter ? over the topic distribution for all datasets, i.e., a batch framework. We estimated ? in iREM for all datasets like CVB to clarify the properties of iREM compared with CVB. L denotes the number of iterations for whole documents in Algorithms 1 and 2. sREM indicates a single-pass variants of iREM in Algorithm 4. l denotes the number of iterations within a document in Algorithm 4. sREM does not make iterations for whole documents. Figure 2 demonstrates the results of experiments on the test set perplexity where lower values indicates better performance. We ran experiments five times with different random initializations and show the averages 1 . PF and sREM calculate the test set perplexity after sweeping through all traing set. VB converges slower than CVB and iREM. Moreover, iREM outperforms CVB in the convergence rate. Although CVB0 outperforms other algorithms for the cases of low number of topics, the convergence rate of CVB0 depends on the number of topics. sREM does not outperform iREM in terms of perplexities, however, the performance of sREM is close to that of iREM As a results, we recommend sREM in a large number of documents or document streams. 
sREM does not need to store old statistics for all documents unlike other algorithms. In addition, the convergence of sREM depends on the length of a document, rather than the number of documents. Since we process each document individually, we can control the number of iterations corresponding to the length of each arrived document. Finally, we discuss the running time. The running time of sREM is O( Ll ) times shorter than that of VB, CVB0, CVB and iREM. The averaged running times of PF(T=300,P=64,R=20) are 28.2 hours in AP and 31.2 hours in WSJ. Those of sREM(T=300,l=5) are 1.2 hours in AP and 1.3 hours in WSJ. 5 Conclusions We developed a deterministic online-learning algorithm for latent Dirichlet allocation (LDA). The proposed algorithm can be applied to excess text data in a continuous stream because it processes received documents one at a time and then discard them. The proposed algorithm was much faster than a batch algorithm and was comparable to the batch algorithm in terms of perplexity in experiments. 1 We exclude the error bar with standard deviation because it is so small that it is hidden by the plot markers 7 AP 3000 VB(L=100) CVB0(L=100) 2500 CVB(L=100) iREM(L=100) 2000 sREM(l=5) 1500 50 100 150 200 250 300 Testset Perplexity Testset Perplexity 3500 2900 WSJ 2700 VB(L=100) 2500 2300 CVB0(L=100) 2100 CVB(L=100) 1900 iREM(L=100) 1700 sREM(l=5) 1500 PF(P=64) 50 100 Number of Topics 250 300 PF(P=64) (b) AP(T=100) 4.00E+03 VB 3.50E+03 CVB0 3.00E+03 CVB 2.50E+03 iREM 2.00E+03 WSJ(T=100) 4.00E+03 Testset Perplexity Testset Perplexity 200 Number of Topics (a) 4.50E+03 150 3.50E+03 VB 3.00E+03 CVB0 2.50E+03 CVB iREM 2.00E+03 sREM sREM 1.50E+03 1.50E+03 PF PF 10 20 30 40 50 60 70 80 90 100 10 20 30 40 50 60 70 80 90 100 Number of itera!ons Number of itera!ons (d) AP(T=300) 5.50E+03 5.00E+03 4.50E+03 4.00E+03 3.50E+03 3.00E+03 2.50E+03 2.00E+03 1.50E+03 WSJ(T=300) 4.50E+03 Testset Perplexity Testset Perplexity (c) VB CVB0 CVB iREM 4.00E+03 VB 3.50E+03 CVB0 3.00E+03 CVB 2.50E+03 iREM 2.00E+03 sREM sREM 1.50E+03 PF PF 10 20 30 40 50 60 70 80 90 100 10 20 30 40 50 60 70 80 90 100 Number of itera!ons Number of itera!ons (f) AP WSJ 2380 Testset Perplexity Testset Perplexity (e) 2500 2400 2300 2200 2100 2000 1900 1800 1700 sREM(l=3) sREM(l=4) sREM(l=5) 2280 sREM(l=3) 2180 2080 sREM(l=4) 1980 sREM(l=5) 1880 1780 1680 50 100 150 200 250 300 50 Number of Topics 100 150 200 250 300 Number of Topics (g) (h) Figure 2: Results of experiments. Left line indicates the results in AP corpus. Right line indicates the results in WSJ corpus. (a) and (b) compared test set perplexity with respect to the number of topics. (c), (d), (e) and (f) compared test set perplexity with respect to the number of iterations in topic T = 100 and T = 300, respectively. (g) and (h) show the relationships between test set perplexity and the number of iterations within a document, i.e., l. References Loulwah Alsumait, Daniel Barbara, and Carlotta Domeniconi. On-line lda: Adaptive topic models for mining text streams with applications to topic detection and tracking. IEEE International Conference on Data Mining, 0:3?12, 2008. ISSN 1550-4786. 8 A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2009. Arindam Banerjee and Sugato Basu. Topic models over text streams: A study of batch and online unsupervised learning. In SIAM International Conference on Data Mining, 2007. D. P. Bertsekas and J. 
D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.

Eric Brochu, Nando de Freitas, and Kejie Bao. Owed to a martingale: A fast Bayesian on-line EM algorithm for multinomial models, 2004.

Kevin R. Canini, Lei Shi, and Thomas L. Griffiths. Online inference of topics with latent Dirichlet allocation. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.

H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, pages 400-407, 1951.

Thomas P. Minka. Estimating a Dirichlet distribution. Technical report, Microsoft, 2000. URL http://research.microsoft.com/~minka/papers/dirichlet/minka-dirichlet.pdf.

Thomas P. Minka. Using lower bounds to approximate integrals. Technical report, Microsoft, 2001. URL http://research.microsoft.com/en-us/um/people/minka/papers/rem.html.

R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.2557.

Masa A. Sato and Shin Ishii. On-line EM algorithm for the normalized Gaussian network. Neural Computation, 12(2):407-432, 2000. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3704.

Yee Whye Teh, David Newman, and Max Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 19, 2007.

Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1973-1981. 2009.

Limin Yao, David Mimno, and Andrew McCallum. Efficient methods for topic model inference on streaming document collections. In KDD '09: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 937-946, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-495-9.
assumes:1 dirichlet:19 running:4 graphical:1 marginalized:1 especially:1 already:1 quantity:3 added:1 rethinking:1 street:1 athena:1 me:1 topic:31 reason:1 length:2 issn:1 relationship:2 setup:1 negative:1 summarization:1 unknown:1 teh:5 observation:1 datasets:3 discarded:1 sold:2 timeseries:1 canini:3 supporting:1 situation:1 extended:1 hinton:2 sweeping:2 introduced:1 david:3 required:1 learned:1 hour:4 bar:1 below:2 max:1 memory:3 hybrid:1 indicator:1 scheme:2 carried:1 text:12 prior:17 discovery:1 marginalizing:1 allocation:7 sufficient:6 article:2 editor:2 storing:1 summary:3 repeat:1 arriving:2 tsitsiklis:2 basu:2 taking:1 sparse:1 distributed:1 regard:1 mimno:2 dimension:1 vocabulary:9 xn:2 made:1 adaptive:1 collection:1 testset:8 welling:2 excess:2 sj:2 approximate:1 ons:4 sequentially:2 corpus:2 xi:2 continuous:3 latent:9 why:1 schuurmans:1 hanna:1 expansion:1 vj:5 whole:4 motivation:1 hyperparameters:1 repeated:1 fig:1 en:1 martingale:2 ny:1 theorem:4 jensen:1 dl:2 intractable:1 stepwise:1 sequential:3 adding:1 importance:1 justifies:1 simply:2 appearance:1 limin:1 tracking:1 satisfies:1 acm:2 replace:1 feasible:1 experimentally:1 nakagawa:1 infinite:1 kurihara:2 sampler:1 total:2 domeniconi:1 pas:15 experimental:1 select:1 owed:1 people:1 handling:1
3,267
396
A VLSI Neural Network for Color Constancy Andrew Moore Geoffrey Fox? Computation and Neural Systems Program, 116-81 Dept. of Physics California Institute of Technology California Institute of Technology Pasadena, CA 91125 Pasadena, CA 91125 John Allman Dept. of Biology, 216-76 California Institute of Technology Pasadena, CA 91125 Rodney Goodman Dept. of Electrical Engineering, 116-81 California Institute of Technology Pasadena, CA 91125 Abstract A system for color correction has been designed, built, and tested successfully; the essential components are three custom chips built using subthreshold analog CMOS VLSI. The system, based on Land's Retinex theory of color constancy, produces colors similar in many respects to those produced by the visual system. Resistive grids implemented in analog VLSI perform the smoothing operation central to the algorithm at video rates. With the electronic system, the strengths and weaknesses of the algorithm are explored. 1 A MODEL FOR COLOR CONSTANCY Humans have the remarkable ability to perceive object colors as roughly constant even if the color of the illumination is varied widely. Edwin Land, founder of the Polaroid Corporation, models the computation that results in this ability as three identical center-surround operations performed independently in three color planes, such as red, green, and blue (Land, 1986). The basis for this model is as follows. Consider first an array of grey papers with different reflectances. (Land designated these arrays Mondrians, since they resemble the works of the Dutch painter Piet ?Present address: Dept. of Physics, Syracuse University, Syracuse, NY 13244 370 A VLSI Neural Network for Color Constancy Mondrian.) Land illuminated a Mondrian with a gradient of illumination, ten times more bright at the top than at the bottom, so that the flux reaching the eye from a dark grey patch at top was identical to the flux from a light grey patch at bottom. Subjects reported that the top paper was dark grey and the bottom paper was light grey. Land accounted for this with a center minus surround model. At each point in an image, the incoming light is compared to a spatial average of light in the neighborhood of the point in question. Near the top of the Mondrian, the abundance of white is sensed and subtracted from the central sensor to normalize the central reading with respect to neighboring values, weighted with distance; near the bottom, the abundance of dark is sensed and used to correct the central reading. Land proposed that the weighting function of the surround is a monotonic decreasing function of distance, such as l/r2. In earlier work, similar experiments were carried out with color Mondrians (Land, 1977; McCann et. al., 1976). However, instead of varying the intensity of illumination, Land and his colleagues varied the color of the illumination. The color of patches in a Mondrian remained nearly constant despite large changes in the illuminant color. This is the phenomenon of color constancy: the ability of observers to judge, under a wide variety of lighting conditions, the approximate reflectance or intrinsic color of objects. Land and his colleagues proposed a variety of different models for this phenomenon, collectively referred to as Retinex models. (The term Retinex was coined by Land since he was not sure whether the computation was going on in the retina, the cortex, or both.) In his most recent paper on the subject (Land, 1986), Land simply extended the black-and-white model to the three color dimensions. 
In each of the three independent color planes, the color at a given point is compared to that of the points surrounding it, weighted as 1/r².

2 EFFICIENT CALCULATION OF THE SURROUND

In practical terms, the Retinex algorithm corresponds to subtracting from an image a blurred version of itself. The distance weighting (type of blurring) Land proposes varies as 1/r², so the operation is a center-minus-surround operation, where the surround is the center convolved with a 1/r² kernel:

output_i(x, y) = I_i′(x, y) − I_i′(x, y) ∗ (1/r²),   (1)

where I_i is the signal or lightness in color plane i, I_i′ is the log of the signal, and ∗ denotes convolution. The logs are important since the signal is composed of illuminant times reflectance, and the log of a product is a sum. By subtracting the blurred version of the image after taking logs, the illuminant is subtracted away in the ideal case (but see below). This type of Retinex algorithm, then, has a psychophysical basis and sound computational underpinnings (Hurlbert, 1986). But the complexity is too great. Since the required surround is so large, such a convolution across an N × N pixel image entails on the order of N⁴ operations. On a chip, this corresponds to explicit connections from each pixel to most if not all other pixels.

A similar operation can be carried out much more efficiently by switching from a convolution to a resistive grid calculation. The operations are similar since the weighting of neighboring points (the Green's function) of a resistive grid decreases in the limit as the exponential of the distance from a given location on the grid (Mead, 1989). Again, the kernel is a monotonically decreasing function. With this type of kernel, the operation in each Retinex (color channel) is

output_i(x, y) = I_i′(x, y) − I_i′(x, y) ∗ exp(−r/λ),   (2)

where λ is the length constant, or extent of weighting, of the grid. Since the calculation is purely local, the complexity is reduced dramatically, from O(N⁴) to O(N²). On a chip, a local computation corresponds to connections only between nearest-neighbor pixels.
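To make Eq. (2) concrete, here is a small software sketch of the per-channel center-minus-surround operation with an exponential kernel. It emulates in software what the resistive grid computes in analog hardware; the explicit kernel construction, the parameter values, and the normalization are illustrative assumptions, not details of the chips.

```python
import numpy as np
from scipy.signal import fftconvolve

def retinex_channel(channel, lam=32.0, eps=1e-6):
    """Center minus surround (Eq. 2) on one color plane, in the log domain."""
    log_c = np.log(channel + eps)                       # center: log signal
    h, w = channel.shape
    y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
    kernel = np.exp(-np.sqrt(x ** 2 + y ** 2) / lam)    # exp(-r/lambda) weighting
    kernel /= kernel.sum()                              # normalize the surround
    surround = fftconvolve(log_c, kernel, mode="same")  # blurred log image
    return log_c - surround                             # output = center - surround

# Apply independently to each plane of an RGB image `img` with values in [0, 1]:
# out = np.stack([retinex_channel(img[..., c]) for c in range(3)], axis=-1)
```

Note that the O(N⁴) convolution cost quoted above applies to direct evaluation; the FFT used here, like the resistive grid, is simply another way of making the large surround affordable.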
3 EVALUATION OF THE ALGORITHM WITH COMPUTER SIMULATIONS

3.1 STRENGTHS AND WEAKNESSES OF THE ALGORITHM

Images of a subject holding a color poster were captured under fluorescent and incandescent light with an RGB video camera and a 24-bit frame grabber. First, the camera was adjusted so that the color looked good under fluorescent light. Next, without readjusting the camera, the fluorescent lights were turned off and the subject was illuminated with incandescent light. The results were unacceptable: the skin color was very red, and, since the incandescent lamp was not very bright, the background was lost in darkness. The two images were processed with the Land algorithm, using resistive grids to form the surround for subtraction. Details of the simulations and color images can be found in (Moore et al., 1991). For the good, fluorescent image, the processing improved the image contrast somewhat. For the poor, incandescent image, the improvement was striking: skin color was nearly normal, shadows were softened, and the background was pulled out of darkness.

Computer simulation also pointed out two weaknesses of the algorithm: color Mach bands and the greying out of large monochromatic regions. Color Mach bands arise from this algorithm in the following way. Suppose that a strongly colored region, e.g. red, abuts a grey region. In the grey region, the surround subtracted at a given point has a strong red component. Therefore, after subtraction of the surround, a grey point is rendered as grey minus red or, equivalently, grey plus the complementary color of red, which is blue-green. Since the surround weighting decreases with distance, the points in the image closest to the red area are strongly tinged with blue-green, while points further away are less discolored. Induction of this sort in black-and-white images is known as the Mach band effect; an analogous induction effect in color is intrinsic to this algorithm.

Greying out of large colored areas is also an intrinsic weakness of the algorithm. The surrounds used in the simulations are quite large, with a length constant of nearly one third of the image. Often a large portion of an image is of a single color; e.g., a blue sky commonly fills the upper half of many natural scenes. In the sky region, the surround samples mostly blue, and with subtraction, blue is subtracted from blue, leaving a grey sky. This effect illustrates the essence of the algorithm: it operates under a grey world assumption. The image for which this algorithm is ideal is richly colored, with reds and their green complements, yellows and their blue complements, and whites with their black complements. In such images, the large surround is sampling the color of a grey "mirror", since the sum of a color and its complement is grey. If this condition holds, the color subtracted when the surround is subtracted from a point in the image is the color of the illuminant; the surround acts as a dull grey mirror which reflects the illuminant. [Many color constancy schemes rely on this assumption; for a review see (Lennie and D'Zmura, 1988).]

3.2 AN EXTENSION TO THE LAND ALGORITHM

These two weaknesses arise from too much surround subtraction in solidly colored areas. One way to minimize these effects is to modulate the surround, before subtraction, with a measure of image structure, which we call edginess. So, while for the original algorithm the operation is output = center − surround, to ameliorate induction effects and lessen the reliance on the grey world assumption, the surround weight should be modified pointwise: a better formulation is output = center − surround × edginess. In this relation, if edginess is given a value close to zero in homogeneous regions like the blue sky, and a value close to one in detailed areas, the surround is effectively zeroed in smooth areas before it is subtracted, so that induction is diminished and more of the original color is retained. The extended algorithm, then, is a working compromise between color constancy via strict application of the grey world assumption and no color constancy at all.

To compute a measure of spatial structure, the average magnitude of the first spatial derivatives is found at each point in each color plane and smoothed on a resistive grid; the output at a given point of this second grid is multiplied with the surround value from the corresponding point of the first resistive grid. In our simulations, the modified algorithm reduces (but does not eliminate) color Mach bands, and returns color to large monochromatic regions such as the sky in the example image discussed above, at the cost of one additional resistive grid per color channel. This extension is not the whole answer, however.
If a large region is highly textured (for example, if there is a flock of birds in the sky), edginess is high, the surround is subtracted at near full strength, and the sky is rendered grey in the textured region. This is a subject of continuing research. We implemented the original algorithm, but not this extension of it, in analog VLSI.

4 VLSI IMPLEMENTATION OF THE RETINEX ALGORITHM

To realize a real-time electronic system for video camera color correction based on Land's algorithm, the three color outputs of a video camera are fed onto three separate resistive grids built from subthreshold analog CMOS VLSI. Each 48 by 47 node resistive grid was built using 2 micron design rules and contains about 60,000 transistors. The circuit details within each pixel are similar to those of the analog retina (Mead, 1989); technical details of the system may be found in (Moore et al., 1991).
4.1.2 Color constancy

For a richly colored scene, the Land algorithm can remove strongly colored illumination, with some qualifications. We constructed a color Mondrian with many differently colored patches of paper, and illuminated it with ordinary fluorescent light plus various colored lights. Under a wide range of conditions, the color of the Mondrian as viewed on a video monitor changes with the illumination, while the Mondrian itself looks fairly stable to a human observer. After passing the images through the electronic color compensation system, the image is also fairly stable for a wide variety of illumination conditions. There is a significant difference, however, between what an observer sees and what the corrected camera image reports. The video images passed through the electronic implementation of the Land algorithm take on the color of the illuminant somewhat in portions of the image that are brighter than average, and take on the complementary color of the illuminant in portions that are darker than average. For example, for a blue illumination, the raw video image looks bluer all over. The processed image changes in a different way. White patches are faintly blue (much less so than in the raw image), and black patches (which remain black in the raw image) are tinged with yellow. There is psychophysical evidence that the same effects are noted by human observers (see Jameson and Hurvich, 1989, for a review), but they are much less pronounced than those produced by the Land algorithm in our experience. Still, the overall effect of constancy in the processed images is convincing as compared to the raw images.

4.2 REAL-TIME VERIFICATION OF ALGORITHM WEAKNESSES

4.2.1 Color Mach bands and greying of large regions

To our surprise, the color Mach band effect, explained above, is less pronounced than we expected; for many scenes the induction effects are not noticeable. It is possible to see the Mach bands clearly by placing colored cards on a grey background - the complementary color of the card surrounds the card as a halo that diminishes with distance from the card. Since the Retinex algorithm relies on the grey world assumption, the algorithm fails where this assumption fails to hold. With the real-time system, we have demonstrated this in many ways. For example, if the video camera is pointed at the color Mondrian and the hand of a Caucasian investigator (with a reddish skin tone) is slowly moved in front of the camera lens, the Mondrian in the background slowly grows more green. Green is the complementary color of red. Another example of practical importance is revealed by zooming in on a particular patch of the Mondrian. As more and more of the image is filled with this patch, the patch grows greyer and greyer, because the correction system subtracts the patch color from itself.

4.2.2 Scene dependence of color constancy

As described above, we were impressed with this algorithm after simulating it on a digital computer. The skin tone of a subject, deeply reddened by incandescent light, was dramatically improved by the algorithm. In the computer study, the subject's face was, by accident rather than design, placed midway between a large white patch and a large black patch. The electronic system yields perfect constancy of skin tone with this configuration also, but not for an arbitrary configuration. In short, the color constancy afforded by this algorithm is scene dependent; to consistently produce perfect color constancy of an object with the real-time system, it is necessary to place the object carefully within a scene. We are still investigating this weakness of the algorithm. Whether it is camera dependent (i.e., the result of camera nonlinearities) remains to be seen.
5 Conclusion

After studying the psychophysics and the computational issues in color constancy, encouraging preliminary results for a particular version of Land's Retinex algorithm were obtained in computer simulation. In order to study the algorithm intensively, an electronic system was developed; the system uses three resistive grids built from subthreshold analog CMOS VLSI to form a blurred version of the image for subtraction from the original. It was found that the system produces images that are more constant, in a sense, than raw video images when the illuminant color varies. However, the constancy is more apparent than real; if absolute constancy of a particular object is desired, that object must be carefully placed in its surroundings. The real-time system allowed us to address this and other such practical issues of the algorithm for the first time.

Acknowledgements

We are grateful to many of our colleagues at Caltech and elsewhere for discussions and support in this endeavor. A.M. was supported by fellowships from the Parsons Foundation and the Pew Charitable Trust and by research assistantships from the Office of Naval Research, the Joint Tactical Fusion Program and the Center for Research in Parallel Computation. We are grateful to DARPA for MOSIS fabrication services, and to Hewlett-Packard for computing support in the Mead Lab. The California Institute of Technology has filed for a U.S. patent for this and other related work.

References

A. Hurlbert. (1986) Formal connections between lightness algorithms. J. Opt. Soc. Am. A 3:1684-1693.
D. Jameson & L.M. Hurvich. (1989) Essay concerning color constancy. Ann. Rev. Psychol. 40:1-22.
E.H. Land. (1977) The Retinex theory of color vision. Scientific American 237:108-128.
E.H. Land. (1986) An alternative technique for the computation of the designator in the Retinex theory of color vision. Proc. Natl. Acad. Sci. USA 83:3078-3080.
P. Lennie & M. D'Zmura. (1988) Mechanisms of color vision. CRC Crit. Rev. Neurobiol. 3(4):333-400.
J.J. McCann, S.P. McKee, & T.H. Taylor. (1976) Quantitative studies in Retinex theory. Vision Res. 16:445-458.
C.A. Mead. (1989) Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.
A. Moore, J. Allman, & R. Goodman. (1991) A Real-time Neural System for Color Constancy. IEEE Trans. Neural Networks 2(2), in press.
On the Theory of Learning with Privileged Information

Dmitry Pechyony, NEC Laboratories, Princeton, NJ 08540, USA, [email protected]
Vladimir Vapnik, NEC Laboratories, Princeton, NJ 08540, USA, [email protected]

Abstract

In the Learning Using Privileged Information (LUPI) paradigm, along with the standard training data in the decision space, a teacher supplies a learner with privileged information in the correcting space. The goal of the learner is to find a classifier with a low generalization error in the decision space. We consider an empirical risk minimization algorithm, called Privileged ERM, that takes into account the privileged information in order to find a good function in the decision space. We outline the conditions on the correcting space that, if satisfied, allow Privileged ERM to have a much faster learning rate in the decision space than that of the regular empirical risk minimization.

1 Introduction

In the classical supervised machine learning paradigm the learner is given a labeled training set of examples and her goal is to find a decision function with small generalization error on the unknown test examples. If the learning problem is easy (e.g. if the learner's space of decision functions contains one with zero generalization error) then, as the training size increases, the decision function found by the learner converges quickly to the optimal one. However, if the learning problem is hard and the learner's space of decision functions is large, then the convergence (or learning) rate is slow. An example of such a hard learning problem is XOR when the space of decision functions is 2-dimensional hyperplanes. The obvious question is "Can we accelerate the learning rate if the learner is given additional information about the learning problem?". During the last years several new paradigms of learning with additional information were proposed that, under some conditions, provably accelerate the learning rate. For example, in semi-supervised learning such additional information is unlabeled training examples. In this paper we consider the recently proposed Learning Using Privileged Information (LUPI) paradigm [8, 9, 10], which uses additional information of a different kind. Let X be a decision space. In the LUPI paradigm, in addition to the standard training data, (x, y) ∈ X × Y, a teacher supplies the learner with privileged information x* in the correcting space X*. The privileged information is only available for the training examples and is never available for the test examples. The LUPI paradigm requires, given a training set {(x_i, x*_i, y_i)}_{i=1}^n, to find a decision function h : X → Y with small generalization error on the unknown test examples x ∈ X. The above question about accelerating the learning rate, reformulated in terms of the LUPI paradigm, is "What kind of additional information should the teacher provide to the learner in order to accelerate her learning rate?". Paraphrased, this question is essentially "Who is a good teacher?". In this paper we outline the conditions for the additional information provided by the teacher that allow for a fast learning rate even in hard problems. The LUPI paradigm emerges in a number of applications, for example time series prediction, protein classification and human computation. The experiments [9] in these domains demonstrated a clear advantage of the LUPI paradigm over supervised learning. The LUPI paradigm can be implemented by the SVM+ algorithm [8], which in turn is based on the well-known SVM algorithm [2].
We now present the version of SVM+ for classification; the version for regression can be found in [9]. Let h(x) = sign(w · x + b) be a decision function and φ(x*) = w* · x* + d be a correcting function. The optimization problem of SVM+ is

min_{w,b,w*,d}  (1/2)||w||² + (γ/2)||w*||² + C Σ_{i=1}^n (w* · x*_i + d)    (1)
s.t.  ∀ 1 ≤ i ≤ n,  y_i(w · x_i + b) ≥ 1 − (w* · x*_i + d),
      ∀ 1 ≤ i ≤ n,  w* · x*_i + d ≥ 0.

The objective function of SVM+ contains two hyperparameters, C > 0 and γ > 0. The term γ||w*||²/2 in (1) is intended to restrict the capacity (or VC-dimension) of the function space containing φ. Let ℓ_X(h(x), y) = [1 − y(w · x + b)]_+ be the hinge loss of the decision function h = (w, b) on the example (x, y) and ℓ_{X*}(φ(x*)) = [w* · x* + d]_+ be the loss of the correcting function φ = (w*, d) on the example x*. The optimization problem (1) can be rewritten as

min_{h=(w,b), φ=(w*,d)}  (1/2)||w||² + (γ/2)||w*||² + C Σ_{i=1}^n ℓ_{X*}(φ(x*_i))    (2)
s.t.  ∀ 1 ≤ i ≤ n,  ℓ_X(h(x_i), y_i) ≤ ℓ_{X*}(φ(x*_i)).

The following optimization problem is a simplified and generalized version of (2):

min_{h∈H, φ∈Φ}  Σ_{i=1}^n ℓ_{X*}(φ(x*_i), y_i)    (3)
s.t.  ∀ 1 ≤ i ≤ n,  ℓ_X(h(x_i), y_i) ≤ ℓ_{X*}(φ(x*_i), y_i),    (4)

where ℓ_X and ℓ_{X*} are arbitrary bounded loss functions, H is a space of decision functions and Φ is a space of correcting functions. Let C > 0 be a constant (that is defined later), [t]_+ = max(t, 0) and

ℓ0((h, φ), (x, x*, y)) = (1/C) · ℓ_{X*}(φ(x*), y) + [ℓ_X(h(x), y) − ℓ_{X*}(φ(x*), y)]_+    (5)

be the loss of the composite hypothesis (h, φ) on the example (x, x*, y). In this paper we study the relaxation of (3):

min_{h∈H, φ∈Φ}  Σ_{i=1}^n ℓ0((h, φ), (x_i, x*_i, y_i)).    (6)

We refer to the learning algorithm defined by (6) as empirical risk minimization with privileged information, or abbreviated Privileged ERM. The basic assumption of Privileged ERM is that if we can achieve a small loss ℓ_{X*}(φ(x*), y) in the correcting space then we should also achieve a small loss ℓ_X(h(x), y) in the decision space. This assumption reflects the human learning process, where the teacher tells the learner which are the most important examples (the ones with a small loss in the correcting space) that the learner should take into account in order to find a good decision rule. The regular empirical risk minimization (ERM) finds a hypothesis ĥ ∈ H that minimizes the training error Σ_{i=1}^n ℓ_X(h(x_i), y_i). While the regular ERM directly minimizes the training error of h, Privileged ERM minimizes the training error of h indirectly, via the minimization of the training error of the correcting function φ and the relaxation of the constraint (4). Let h* be the best possible decision function (in terms of generalization error) in the hypothesis space H. Suppose that for each training example x_i an oracle gives us the value of the loss ℓ_X(h*(x_i), y_i). We use these fixed losses instead of ℓ_{X*}(φ(x*_i), y_i) and find h that satisfies the following system of inequalities:

∀ 1 ≤ i ≤ n,  ℓ_X(h(x_i), y_i) ≤ ℓ_X(h*(x_i), y_i).    (7)

We denote the learning algorithm defined by (7) as OracleERM. A straightforward generalization of the proof of Proposition 1 of [9] shows that the generalization error of the hypothesis ĥ found by OracleERM converges to the one of h* with the rate of 1/n. This rate is much faster than the worst-case convergence rate 1/√n of the regular ERM [3]. In this paper we consider a more realistic setting, where the above oracle is not available.
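As a concrete illustration, the quadratic program (1) can be written down almost verbatim with a convex-optimization library. A minimal sketch, assuming cvxpy is installed; the data shapes and the helper name are our own illustrative choices:

import cvxpy as cp
import numpy as np

def svm_plus(X, Xstar, y, C=1.0, gamma=1.0):
    """Sketch of the SVM+ problem (1). X: (n, p) decision-space features,
    Xstar: (n, q) privileged features, y: (n,) labels in {-1, +1}."""
    n, p = X.shape
    q = Xstar.shape[1]
    w, b = cp.Variable(p), cp.Variable()
    ws, d = cp.Variable(q), cp.Variable()
    slack = Xstar @ ws + d                 # correcting values w*.x*_i + d
    objective = cp.Minimize(0.5 * cp.sum_squares(w)
                            + 0.5 * gamma * cp.sum_squares(ws)
                            + C * cp.sum(slack))
    constraints = [cp.multiply(y, X @ w + b) >= 1 - slack, slack >= 0]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value, ws.value, d.value

Note how the privileged features only shape the per-example slacks; at test time only (w, b) is used, which is exactly the LUPI setting.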
Our subsequent derivations rely heavily on the following definition:

Definition 1.1 A decision function h is uniformly better than the correcting function φ if for any example (x, x*, y) that has non-zero probability, ℓ_{X*}(φ(x*), y) ≥ ℓ_X(h(x), y).

Given a space H of decision functions and a space Φ of correcting functions we define Φ̄ = {φ ∈ Φ | ∃ h ∈ H that is uniformly better than φ}. Note that Φ̄ ⊆ Φ and Φ̄ does not contain correcting functions that are too good for H. Our results are based on the following two assumptions:

Assumption 1.2 Φ̄ ≠ ∅.

This assumption is not restrictive, since it only means that the optimization problem (3) of Privileged ERM has a feasible solution when the training size goes to infinity.

Assumption 1.3 There exists a correcting function φ̄ ∈ Φ̄ such that for any (x, x*, y) that has non-zero probability, ℓ_X(h*(x), y) = ℓ_{X*}(φ̄(x*), y).

Put another way, we assume the existence of a correcting function in Φ̄ that mimics the losses of h*. Let r be the learning rate of Privileged ERM when it is run over the joint X × X* space with the space of decision and correcting functions H × Φ. We develop an upper bound for the risk of the decision function found by Privileged ERM. Under the above assumptions this bound converges to the risk of h* with the same rate r. This implies that if the correcting space is good, so that Privileged ERM in the joint X × X* space has a fast learning rate (e.g. 1/n), then Privileged ERM will have the same fast learning rate (e.g. the same 1/n) in the decision space. That is true even if the decision space is hard and the regular ERM in the decision space has a slow learning rate (e.g. 1/√n). We illustrate this result with an artificial learning problem, where the regular ERM in the decision space cannot learn with a rate faster than 1/√n, but the correcting space is good and Privileged ERM learns in the decision space with the rate of 1/n.

The paper has the following structure. In Section 2 we give additional definitions. In Section 3 we review the existing risk bounds that are used to derive our results. Section 4 contains the proof of the risk bound for Privileged ERM. In Section 5 we show an example when Privileged ERM is provably better than the regular ERM. We conclude and give directions for future research in Section 6. Due to space constraints, most of the proofs appear in the supplementary material.

Previous work. The first attempt at a theoretical analysis of LUPI was done by Vapnik and Vashist [9]. In addition to the analysis of learning with an oracle (mentioned above), they considered an algorithm which is close to, but different from, Privileged ERM. They developed a risk bound (Proposition 2 in [9]) for the decision function found by their algorithm. This bound also applies to Privileged ERM. The bound of [9] is tailored to the classification setting, with 0/1-loss functions in the decision and the correcting space. By contrast, our bound holds for any bounded loss functions and allows the loss functions ℓ_X and ℓ_{X*} to be different. The bound of [9] depends on the generalization error of the correcting function φ̂ found by Privileged ERM. Vapnik and Vashist [9] concluded that if we could bound the convergence rate of φ̂ then this bound would imply a bound on the convergence rate of the decision function found by their algorithm.
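Definition 1.1 quantifies over all examples with non-zero probability, so on a finite sample one can only check its empirical counterpart. A small sketch of that check; the names and the per-example loss matrices are our own illustration:

import numpy as np

def uniformly_better(h_loss, phi_loss):
    # Empirical stand-in for Definition 1.1: h's loss never exceeds phi's
    # loss on any observed example. h_loss, phi_loss: (n,) arrays.
    return np.all(h_loss <= phi_loss)

def phi_bar_indices(H_losses, Phi_losses):
    # Sample-based stand-in for Phi-bar: the correcting functions for which
    # some h in H is uniformly better. Rows index hypotheses, columns index
    # the training examples.
    return [j for j in range(Phi_losses.shape[0])
            if any(uniformly_better(H_losses[i], Phi_losses[j])
                   for i in range(H_losses.shape[0]))]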
2 Definitions

The triple (x, x*, y) is sampled from the distribution D, which is unknown to the learner. We denote by D_X the marginal distribution over (x, y) and by D_{X*} the marginal distribution over (x*, y). The distribution D_X is given by nature and the distribution D_{X*} is constructed by the teacher. The spaces H and Φ of decision and correcting functions are chosen by the learner. Let R(h) = E_{(x,y)~D_X} {ℓ_X(h(x), y)} and R(φ) = E_{(x*,y)~D_{X*}} {ℓ_{X*}(φ(x*), y)} be the generalization errors of the decision function h and the correcting function φ respectively. We assume that the loss functions ℓ_X and ℓ_{X*} have range [0, 1]. This assumption can be satisfied by any bounded loss function by simply dividing it by its maximal value. We denote by h* = argmin_{h∈H} R(h) and φ* = argmin_{φ∈Φ} R(φ) the decision and the correcting function with the minimal generalization error w.r.t. the loss functions ℓ_X and ℓ_{X*}. Also, we denote by ℓ01 the 0/1 loss, by R01(h) = E_{(x,y)~D_X} {ℓ01(h(x), y)} the generalization error of h w.r.t. the 0/1 loss and by h*_01 = argmin_{h∈H} R01(h) the decision function in H with the minimal generalization 0/1 error. Let

R0_n(h, φ) = (1/n) Σ_{i=1}^n ℓ0((h, φ), (x_i, x*_i, y_i))  and  R0(h, φ) = E_{(x,x*,y)~D} {ℓ0((h, φ), (x, x*, y))}    (8)

be respectively the empirical and generalization errors of the hypothesis (h, φ) w.r.t. the loss function ℓ0. We denote by (ĥ, φ̂) = argmin_{(h,φ)∈H×Φ} R0_n(h, φ) the empirical risk minimizer and by (h0, φ0) = argmin_{(h,φ)∈H×Φ} R0(h, φ) the minimizer of the generalization error w.r.t. the loss function ℓ0. Note that in general h* can be different from h0, and also φ0 can be different from φ*. Let (H, Φ) = {(h, φ) ∈ H × Φ | h is uniformly better than φ}. By Assumption 1.2, (H, Φ) ≠ ∅. We will use an additional technical assumption:

Assumption 2.1 There exists a constant A > 0 such that

inf { E_{(x,x*,y)~D} {[ℓ_X(h(x), y) − ℓ_{X*}(φ(x*), y)]_+}  |  (h, φ) ∉ (H, Φ), R(φ) < R(φ̄) } ≥ A.

This assumption is satisfied, for example, in the classification setting when ℓ_X and ℓ_{X*} are 0/1 loss functions and the probability density function p(x, x*, y) of the underlying distribution D is bounded away from zero for all points with nonzero probability. In this case A can be taken as inf {p(x, x*, y) | (x, x*, y) such that p(x, x*, y) ≠ 0}. The following lemma (proved in Appendix A in the full version of the paper) shows that for sufficiently large C the optimization problems (3) and (6) are asymptotically (when n → ∞) equivalent:

Lemma 2.2 Suppose that Assumptions 1.2, 1.3 and 2.1 hold true. Then there exists a finite C1 ∈ ℝ such that for any C ≥ C1, (h0, φ0) ∈ (H, Φ). Moreover, h0 = h* and φ0 = φ̄.

In all our subsequent derivations we assume that C has a finite value for which (3) and (6) are equivalent. Later on we will show how to choose the value of C that optimizes the forthcoming risk bound. The risk bounds presented in this paper are based on the VC-dimension of various function classes. While the definition of VC-dimension for binary functions is well known in the learning community, the one for real-valued functions is less known and we review it here. Let F be a set of real-valued functions f : S → ℝ and T(F) = {(x, t) ∈ S × ℝ | ∃ f ∈ F s.t. 0 ≤ |f(x)| ≤ t}. We say that the set T = {(x_i, t_i)}_{i=1}^{|T|} ⊆ T(F) is shattered by F if for any T' ⊆ T there exists a function f ∈ F such that for any (x_i, t_i) ∈ T', |f(x_i)| ≤ t_i, and for any (x_i, t_i) ∈ T \ T', |f(x_i)| > t_i. The VC-dimension of F is defined as the VC-dimension of the set T(F), namely the maximal size of a set T ⊆ T(F) that is shattered by F.
3 Review of existing excess risk bounds with fast convergence rates

We derive our risk bounds from the generic excess risk bounds developed by Massart and Nedelec [6] and generalized by Gine and Koltchinskii [4] and Koltchinskii [5]. In this paper we use the version of the bounds given in [4] and [5]. Let F be a space of hypotheses f : S → S', let ℓ : S' × {−1, +1} → ℝ be a real-valued loss function such that 0 ≤ ℓ(f(x), y) ≤ 1 for any f ∈ F and any (x, y). Let f* = argmin_{f∈F} E_{(x,y)} {ℓ(f(x), y)}, let f̂_n = argmin_{f∈F} Σ_{i=1}^n ℓ(f(x_i), y_i) and let D > 0 be a constant such that for any f ∈ F,

Var_{(x,y)} {ℓ(f(x), y) − ℓ(f*(x), y)} ≤ D · E_{(x,y)} {ℓ(f(x), y) − ℓ(f*(x), y)}.    (9)

This condition is a generalization of Tsybakov's low-noise condition [7] to arbitrary loss functions and arbitrary hypothesis spaces. The constant D in (9) characterizes the error surface of the hypothesis space F. Suppose that E_{(x,y)} {ℓ(f(x), y) − ℓ(f*(x), y)} is very small, namely f is nearly optimal. If f is almost the same as f* then the variance in the left hand side of (9), as well as the value of D, will be small. But if f differs significantly from f* then the variance in the left hand side of (9), as well as the value of D, will be large. Thus, if we take the variance in the left hand side of (9) as a measure of distance between f and f*, then hypothesis spaces with large and small D can be visualized as shown in Figure 1.

[Figure 1: Visualization of the hypothesis spaces; (a) a hypothesis space with small D, (b) a hypothesis space with large D. The horizontal axis measures the distance (in terms of the variance) between a hypothesis f and the best hypothesis f* in F. The vertical axis is the minimal error of hypotheses in F with the fixed distance from f*. Note that the error function displayed in the graphs can be non-continuous. The large value of D in the hypothesis space in graph (b) is caused by hypothesis A, which is significantly different from f* but has nearly-optimal error.]

Let V be the VC-dimension of F. The following theorem is a straightforward generalization of Theorem 5.8 in [5].

Theorem 3.1 ([5]) There exists a constant K > 0 such that if n > V · D² then for any δ > 0, with probability of at least 1 − δ,

E_{(x,y)} {ℓ(f̂_n(x), y)} ≤ E_{(x,y)} {ℓ(f*(x), y)} + (K·D/n) (V log(n/(V·D²)) + ln(1/δ)).    (10)

Let B = (V log n + log(1/δ))/n. If the condition of Theorem 3.1 does not hold, namely if n ≤ V · D², then we can use the following fallback risk bound:

Theorem 3.2 ([1, 8]) There exists a constant K' such that for any δ > 0, with probability of at least 1 − δ,

E_{(x,y)} {ℓ(f̂_n(x), y)} ≤ E_{(x,y)} {ℓ(f*(x), y)} + K' (√(E_{(x,y)} {ℓ(f*(x), y)} · B) + B).    (11)

Definition 3.3 Let T = T(E_{(x,y)} {ℓ(f*(x), y)}, V, δ) be a constant such that for all n < T it holds that E_{(x,y)} {ℓ(f*(x), y)} < B. For n ≤ T the bound (11) has a convergence rate of 1/n, and for n > T the bound (11) has a convergence rate of 1/√n.

The main difference between (10) and (11) is the fast convergence rate of 1/n vs. the slow one of 1/√n in the regime of n > max(T, V·D²). By Theorem 3.1, starting from n > n(D) = V·D² we always have the convergence rate of 1/n. Thus, the smaller the value of D, the smaller the threshold n(D) for obtaining the fast convergence rate of 1/n.
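To get a feel for when the fast-rate regime of Theorem 3.1 applies, the threshold n(D) = V·D² and the two excess-risk terms can be evaluated numerically. A minimal sketch; the theorem constants K and K' are unknown, so they are set to 1 here purely for illustration:

import numpy as np

def fast_bound(n, V, D, delta, K=1.0):
    """Excess-risk term of (10); valid only when n > V * D**2."""
    return (K * D / n) * (V * np.log(n / (V * D**2)) + np.log(1 / delta))

def fallback_bound(n, V, delta, risk_star, K=1.0):
    """Excess-risk term of (11), with B = (V log n + log(1/delta)) / n."""
    B = (V * np.log(n) + np.log(1 / delta)) / n
    return K * (np.sqrt(risk_star * B) + B)

# Threshold n(D) = V * D**2 below which the fast bound (10) does not apply:
V, D = 4, 1.71
print("fast-rate regime starts at n >", V * D**2)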
4 Upper Risk Bound

For any C ≥ 1, any (x, x*, y), any h ∈ H and φ ∈ Φ, and any loss functions ℓ_X and ℓ_{X*},

ℓ_X(h(x), y) ≤ ℓ_{X*}(φ(x*), y) + C [ℓ_X(h(x), y) − ℓ_{X*}(φ(x*), y)]_+.

Hence, using (5), we obtain that

R(ĥ) = E_{(x,y)} {ℓ_X(ĥ(x), y)} ≤ C · E_{(x,x*,y)} {ℓ0((ĥ, φ̂), (x, x*, y))} = C · R0(ĥ, φ̂).    (12)

Let ℓ1(h, h*, x, y) = ℓ_X(h(x), y) − ℓ_X(h*(x), y) and D_H ≥ 0 be a constant such that for any h ∈ H,

D_H · E_{(x,y)} {ℓ1(h, h*, x, y)} ≥ Var_{(x,y)} {ℓ1(h, h*, x, y)}.    (13)

Similarly, let ℓ2(h, h0, φ, φ0, x, x*, y) = ℓ0((h, φ), (x, x*, y)) − ℓ0((h0, φ0), (x, x*, y)) and D_{H,Φ} ≥ 0 be a constant such that for all (h, φ) ∈ H × Φ,

D_{H,Φ} · E_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)} ≥ Var_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)}.    (14)

Let L(H, Φ) = {ℓ0((h, φ), (·, ·, ·)) | h ∈ H, φ ∈ Φ} be the set of loss functions ℓ0 corresponding to hypotheses from H × Φ and let V_{L(H,Φ)} be the VC-dimension of L(H, Φ). Similarly, let L(H) = {ℓ_X(h(·), ·) | h ∈ H} and L(Φ) = {ℓ_{X*}(φ(·), ·) | φ ∈ Φ} be the sets of loss functions that correspond to the hypotheses in H and Φ, and let V_{L(H)} and V_{L(Φ)} be the VC-dimensions of L(H) and L(Φ) respectively. Note that if ℓ_X = ℓ01 then V_{L(H)} is also the VC-dimension of H (the same holds for V_{L(Φ)}).

Lemma 4.1 V_{L(H,Φ)} = V_{L(H)} + V_{L(Φ)}.

Proof. See Appendix C in the full version of the paper.

We apply Theorem 3.1 to the hypothesis space H × Φ and the loss function ℓ0((h, φ), (x, x*, y)) and obtain that there exists a constant K > 0 such that if n > V_{L(H,Φ)} · D²_{H,Φ} then for any δ > 0, with probability at least 1 − δ,

R0(ĥ, φ̂) ≤ R0(h0, φ0) + (K·D_{H,Φ}/n) (V_{L(H,Φ)} ln(n/(V_{L(H,Φ)} · D²_{H,Φ})) + ln(1/δ)).

Using (12) we obtain that

R(ĥ) ≤ C · R0(h0, φ0) + (C·K·D_{H,Φ}/n) (V_{L(H,Φ)} ln(n/(V_{L(H,Φ)} · D²_{H,Φ})) + ln(1/δ)).    (15)

It follows from Assumption 1.3 and Lemma 2.2 that

R0(h0, φ0) = (1/C) R(φ0) = (1/C) R(φ̄) = (1/C) R(h*).    (16)

We substitute (16) into (15) and obtain that there exists a constant K > 0 such that if n > V_{L(H,Φ)} · D²_{H,Φ} then for any δ > 0, with probability at least 1 − δ,

R(ĥ) ≤ R(h*) + (C·K·D_{H,Φ}/n) (V_{L(H,Φ)} ln(n/(V_{L(H,Φ)} · D²_{H,Φ})) + ln(1/δ)).

We bound V_{L(H,Φ)} by Lemma 4.1 and obtain our final risk bound, which is summarized in the following theorem:

Theorem 4.2 Suppose that Assumptions 1.2, 1.3 and 2.1 hold. Let D_{H,Φ} be as defined in (14), C1 be as defined in Lemma 2.2, and V̄_{L(H,Φ)} = V_{L(H)} + V_{L(Φ)}. Suppose that C > C1 and n > V̄_{L(H,Φ)} · D²_{H,Φ}. Then for any δ > 0, with probability of at least 1 − δ,

R(ĥ) ≤ R(h*) + (C·K·D_{H,Φ}/n) (V̄_{L(H,Φ)} ln(n/(V̄_{L(H,Φ)} · D²_{H,Φ})) + ln(1/δ)),    (17)

where K > 0 is a constant.

According to this bound, R(ĥ) converges to R(h*) with the rate of 1/n. If Assumption 1.3 does not hold then it is easy to see that we obtain the same bound as (17), but with R(h*) replaced by R(φ0). In this case the upper bound on R(ĥ) converges to R(φ0) with the rate of 1/n. We now provide further analysis of the risk bound (17). Let ℓ3(φ, φ0, x*, y) = ℓ_{X*}(φ(x*), y) − ℓ_{X*}(φ0(x*), y) and D_Φ ≥ 0 be a constant such that for any φ ∈ Φ,

D_Φ · E_{(x*,y)} {ℓ3(φ, φ0, x*, y)} ≥ Var_{(x*,y)} {ℓ3(φ, φ0, x*, y)}.    (18)

Similarly, let D'_{H,Φ} ≥ 0 be a constant such that for all (h, φ) ∈ (H × Φ) \ (H, Φ),

D'_{H,Φ} · E_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)} ≥ Var_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)}.

Lemma 4.3 D_{H,Φ} ≤ max(D_Φ/C, D'_{H,Φ}).

Proof. See Appendix B in the full version of the paper.

By Lemma 4.3, C · D_{H,Φ} ≤ max(D_Φ, C · D'_{H,Φ}). Since the loss function ℓ2 depends on C, the constant D'_{H,Φ} depends on C too. Thus, ignoring the left-hand logarithmic term in (17), the optimal value of C is the one that is larger than C1 and minimizes C · D'_{H,Φ}. We now show that such a minimum indeed exists. By the definition of the loss function ℓ2,
0 < lim_{C→∞} sup_{(h,φ) ∈ (H×Φ)\(H,Φ)}  Var_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)} / E_{(x,x*,y)} {ℓ2(h, h0, φ, φ0, x, x*, y)}  ≤ 1.    (19)

Therefore for very large C it holds that 0 < s ≤ D'_{H,Φ} ≤ 1, where s is the value of the above limit. Consequently lim_{C→∞} C · D'_{H,Φ} = ∞. Since the function g(C) = C · D'_{H,Φ} is continuous and finite in C = C1, there exists a point C = C* ∈ [C1, ∞) that minimizes it.

5 When Privileged ERM is provably better than the regular ERM

We show an example that demonstrates the difference between empirical risk minimization in the X space and empirical risk minimization with privileged information in the joint X × X* space. In particular, we show in this example that for not too small training sizes (as specified by the conditions of the bound (11) and Theorem 4.2) the learning rate of the regular ERM in the X space is 1/√n, while the learning rate of Privileged ERM in the joint X × X* space is 1/n. We consider the classification setting and all loss functions in our example are 0/1 losses. Let D_X = {D_X(ε) | 0 < ε < 0.1} be an infinite family of distributions of examples in the X space. All distributions in D_X have non-zero support on four points, denoted by X1, X2, X3 and X4. We assume that these points lie on a 1-dimensional line, as shown in Figure 2(a). Figure 2(a) also shows the probability mass of each point in the distribution D_X(ε). The hypothesis space H consists of hypotheses h_t(x) = sign(x − t) and h'_t(x) = −sign(x − t). The best hypothesis in H is h'_1 and its generalization error is 1/4 − 2ε. The hypothesis space H also contains a hypothesis h'_3, which is slightly worse than h'_1 and has a generalization error of 1/4 + ε. It can be verified that for a fixed D_X(ε) and H the constant D_H (defined in equation (13)) is

D_H = 1/(6ε) − ε/3 ≤ 1/(6ε).    (20)

Note that the inequality in (20) is very tight since ε can be arbitrarily small. The VC-dimension V_H of H is 2. Suppose that ε is sufficiently small such that V_H · D²_H > T(1/4 − 2ε, V_H, δ), where the function T(·, ·, ·) is defined in Definition 3.3. In order to use the risk bound (10) with our D_X and H, the condition

n > V_H · D²_H = 1/(18ε²)    (21)

should be satisfied. But since ε can be very small, the condition (21) is not satisfied for a large range of n's. Hence, according to (11), for distributions D_X(ε) that satisfy T(1/4 − 2ε, 2, δ) ≤ 1/(18ε²) we obtain that R01(ĥ) converges to R01(h*) with the rate of at least 1/√n. The following lower bound shows that R01(ĥ) converges to R01(h*) with the rate of at most 1/√n.

[Figure 2: the X and X* spaces; (a) the X space, (b) the X* space.]

Lemma 5.1 Suppose that ε < 1/16. Let δ_n = exp(−20nε²). Then for any n > 256, with probability at least δ_n,

R01(ĥ) − R01(h*) ≥ √(ln(1/δ_n)/(20n)).

By combining the upper and lower bounds we obtain that the convergence rate of R01(ĥ) to R01(h*) is exactly 1/√n. The proof of the lower bound appears in Appendix D in the full version of the paper.
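The gap between the two regimes is easy to see numerically: the threshold (21) for the regular ERM bound blows up as ε shrinks, while the privileged threshold computed below from Theorem 4.2 (about 11.7) does not depend on ε. A quick check of (20)-(21):

# Quick numeric check of (20)-(21) for the example's distribution family.
for eps in (0.01, 0.001):
    D_H = 1 / (6 * eps) - eps / 3          # equation (20)
    threshold = 2 * D_H ** 2               # n > V_H * D_H^2 with V_H = 2
    print(f"eps={eps}: D_H = {D_H:.1f}, fast rate needs n > {threshold:.0f}")
# eps=0.01 -> n > ~555; eps=0.001 -> n > ~55,555: the fast-rate regime of
# the regular ERM bound recedes without limit as eps shrinks.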
Suppose that the teacher constructed the distribution D_{X*}(ε) of examples in the X* space in the following way. D_{X*}(ε) has non-zero support on four points, denoted by X*_1, X*_2, X*_3 and X*_4, that lie on a 1-dimensional line, as shown in Figure 2(b). Figure 2(b) shows the probability mass of each point in the X* space. We assume that the joint distribution of (X, X*) has non-zero support only on the points (X1, X*_1), (X2, X*_2), (X3, X*_3) and (X4, X*_4). The hypothesis space Φ consists of hypotheses φ_t(x*) = sign(x* − t) and φ'_t(x*) = −sign(x* − t). The best hypothesis in Φ is φ'_2 and its generalization error is 0. However, there is no h ∈ H that is uniformly better than φ'_2. The best hypothesis in Φ, among those that have a uniformly better hypothesis in H, is φ'_1 and its generalization error is 1/4 − 2ε. h'_1 is uniformly better than φ'_1. It can be verified that for such D_{X*}(ε) and Φ the constant D_Φ (defined in equation (18)) is

D_Φ = (11/16 − 3ε − 4ε²)/(1/4 + 2ε) ≤ 2.75.    (22)

Note that the inequality in (22) is very tight since ε can be arbitrarily small. Moreover, it can be verified that the C that minimizes C · D'_{H,Φ} is C* = 2.6. For C = C* it holds that D'_{H,Φ} = 1.71 and D_Φ/C = 1.06. It is easy to see that our example satisfies Assumptions 1.2 and 1.3 (the last assumption is satisfied with φ̄ = φ'_1). Also, it can be verified that Assumption 2.1 is satisfied with A = 1/4 − 2ε, and C1 = 1.1 < C* satisfies Lemma 2.2. The VC-dimension of Φ is 2. Hence, by Theorem 4.2 and Lemma 4.3, if n > (2 + 2) · 1.71² ≈ 11.7 then R01(ĥ) converges to R01(h*) with the rate of at least 1/n. Since our bounds on D_Φ and D'_{H,Φ} are independent of ε, the convergence rate of 1/n holds for any distribution in D_X. We obtained that for 11.7 < n ≤ 1/(18ε²) the upper bound (17) converges to R01(h*) with the rate of 1/n, while the upper bound (11) converges to R01(h*) with the rate of 1/√n. This improvement was possible due to the teacher's construction of D_{X*}(ε) and the learner's choice of Φ. The hypothesis h'_3 caused the value of D_H to be large and thus prevented us from obtaining a 1/n convergence rate for a large range of n's. We constructed D_{X*}(ε) and Φ in such a way that Φ does not have a hypothesis φ with exactly the same dichotomy as the bad hypothesis h'_3. With such a construction, any φ ∈ Φ such that h'_3 is uniformly better than φ has a generalization error significantly larger than the one of h'_3. For example, the best hypothesis in Φ for which h'_3 is uniformly better is φ'_0, and its generalization error is 1/2.

6 Conclusions

We formulated the algorithm of empirical risk minimization with privileged information and derived a risk bound for it. Our risk bound outlines the conditions on the correcting space that, if satisfied, allow fast learning in the decision space, even if the original learning problem in the decision space is very hard. We showed an example where the privileged information provably and significantly improves the learning rate. In this paper we showed that a good correcting space can improve the learning rate from 1/√n to 1/n. But, given a good correcting space, can we achieve a learning rate faster than 1/n? Another interesting problem is to analyze Privileged ERM when the learner does not completely trust the teacher. This condition translates to the constraint ℓ_X(h(x), y) ≤ ℓ_{X*}(φ(x*), y) + ρ in (3) and the term [ℓ_X(h(x), y) − ℓ_{X*}(φ(x*), y) − ρ]_+ in (6), where ρ ≥ 0 is a hyperparameter. Finally, an important direction is to develop risk bounds for SVM+ (which is a regularized version of Privileged ERM) and to show when it is provably better than SVM.

References

[1] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:329-375, 2005.
[2] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[3] L. Devroye and G. Lugosi. Lower bounds in pattern recognition and learning. Pattern Recognition, 28(7):1011-1018, 1995.
[4] E. Gine and V. Koltchinskii.
Concentration inequalities and asymptotic results for ratio type empirical processes. Annals of Probability, 34(3):1143-1216, 2006.
[5] V. Koltchinskii. 2008 Saint Flour lectures: Oracle inequalities in empirical risk minimization and sparse recovery problems, 2008. Available at fodava.gatech.edu/files/reports/FODAVA09-17.pdf.
[6] P. Massart and E. Nedelec. Risk bounds for statistical learning. Annals of Statistics, 34(5):2326-2366, 2006.
[7] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1):135-166, 2004.
[8] V. Vapnik. Estimation of dependencies based on empirical data. Springer-Verlag, 2nd edition, 2006.
[9] V. Vapnik and A. Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5-6):544-557, 2009.
[10] V. Vapnik, A. Vashist, and N. Pavlovich. Learning using hidden information: Master class learning. In Proceedings of the NATO workshop on Mining Massive Data Sets for Security, pages 3-14, 2008.
Causal discovery in multiple models from different experiments

Tom Heskes, Radboud University Nijmegen, The Netherlands, [email protected]
Tom Claassen, Radboud University Nijmegen, The Netherlands, [email protected]

Abstract

A long-standing open research problem is how to use information from different experiments, including background knowledge, to infer causal relations. Recent developments have shown ways to use multiple data sets, provided they originate from identical experiments. We present the MCI-algorithm as the first method that can infer provably valid causal relations in the large sample limit from different experiments. It is fast, reliable and produces very clear and easily interpretable output. It is based on a result that shows that constraint-based causal discovery is decomposable into a candidate pair identification and subsequent elimination step that can be applied separately from different models. We test the algorithm on a variety of synthetic input model sets to assess its behavior and the quality of the output. The method shows promising signs that it can be adapted to suit causal discovery in real-world application areas as well, including large databases.

1 Introduction

Discovering causal relations from observational data is an important, ubiquitous problem in science. In many application areas there is a multitude of data from different but related experiments. Often the set of measured variables is not the same between trials, or the circumstances under which they were conducted differed, making it difficult to compare and evaluate results, especially when they seem to contradict each other, e.g. when a certain dependency is observed in one experiment, but not in another. Results obtained from one data set are often used to either corroborate or challenge results from another. Yet how to reconcile information from multiple sources, including background knowledge, into a single, more informative model is still an open problem. Constraint-based methods like the FCI-algorithm [1] are provably correct in the large sample limit, as are Bayesian methods like the greedy search algorithm GES [2] (with additional post-processing steps to handle hidden confounders). Both are defined in terms of modeling a single data set and have no principled means to relate to results from other sources in the process. Recent developments, like the ION-algorithm by Tillman et al. [3], have shown that it is possible to integrate multiple, partially overlapping data sets. However, such algorithms are still essentially single model learners in the sense that they assume there is one, single encapsulating structure that accounts for all observed dependencies in the different models. In practice, observed dependencies often differ between data sets, precisely because the experimental circumstances were not identical in different experiments, even when the causal system at the heart of it was the same. The method we develop in this article shows how to distinguish between causal dependencies internal to the system under investigation and merely contextual dependencies. Mani et al. [4] recognized the 'local' aspect of causal discovery from Y-structures embedded in data: it suffices to establish a certain (in)dependency pattern between variables, without having to uncover the entire graph. In section 4 we take this one step further by showing that such causal
In section 4 we take this one step further by showing that such causal 1 discovery can be decomposed into two separate steps: a conditional independency to identify a pair of possible causal relations (one of which is true), and then a conditional dependency that eliminates one option, leaving the other. The two steps rely only on local (marginal) aspects of the distribution. As a result the conclusion remains valid, even when, unlike causal inference from Y-structures, the two pieces of information are taken from different models. This forms the basis underpinning the MCI-algorithm in section 6. Section 2 of this article introduces some basic terminology. Section 3 models different experiments. Section 4 establishes the link between conditional independence and local causal relations, which is used in section 5 to combine multiple models into a single causal graph. Section 6 describes a practical implementation in the form of the MCI-algorithm. Sections 7 and 8 discuss experimental results and suggest possible extensions to other application areas. 2 Graphical model preliminaries First a few familiar notions from graphical model theory used throughout the article. A directed graph G is a pair hV, Ei, where V is a set of vertices or nodes and E is a set of edges between pairs of nodes, represented by arrows X ? Y . A path ? = hV0 , . . . , Vn i between V0 and Vn in G is a sequence of distinct vertices such that for 0 ? i ? n ? 1, Vi and Vi+1 are connected by an edge in G. A directed path is a path that is traversed entirely in the direction of the arrows. A directed acyclic graph (DAG) is a directed graph that does not contain a directed path from any node to itself. A vertex X is an ancestor of Y (and Y is a descendant of X) if there is a directed path from X to Y in G or if X = Y . A vertex Z is a collider on a path ? = h. . . , X, Z, Y, . . .i if it contains the subpath X ? Z ? Y , otherwise it is a noncollider. A trek is a path that does not contain any collider. For disjoint (sets of) vertices X, Y and Z in a DAG G, X is d-connected to Y conditional on Z (possibly empty), iff there exists an unblocked path ? = hX, . . . , Y i between X and Y given Z, i.e. such that every collider on ? is an ancestor of some Z ? Z and every noncollider on ? is not in Z. If not, then all such paths are blocked, and X is said to be d-separated from Y ; see [5, 1] for details. Definition 1. Two nodes X and Y are minimally conditionally independent given a set of nodes Z, denoted [X ? ? Y | Z], iff X is conditionally independent of Y given a minimal set of nodes Z. Here minimal, indicated by the square brackets, implies that the relation does not hold for any proper subset Z0 ( Z of the (possibly empty) set Z. A causal DAG GC is a graphical model in the form of a DAG where the arrows represent direct causal interactions between variables in a system [6]. There is a causal relation X ? Y , iff there is a directed path from X to Y in GC . Absence of such a path is denoted X  ? Y . The causal Markov condition links the structure of a causal graph to its probabilistic concomitant, [5]: two variables X and Y in a causal DAG GC are dependent given a set of nodes Z, iff they are connected by a path ? in GC that is unblocked given Z; so there is a dependence X ?  ? Y iff there is a trek between X and Y in the causal DAG. We assume that the systems we consider correspond to some underlying causal DAG over a great many observed and unobserved nodes. 
The distribution over the subset of observed variables can then be represented by a (maximal) ancestral graph (MAG) [7]. Different MAGs can represent the same distribution, but only the invariant features, common to all MAGs that can faithfully represent that distribution, carry identifiable causal information. The complete partial ancestral graph (CPAG) P that represents the equivalence class [G] of a MAG G is a graph with either a tail ???, arrowhead ?>? or circle mark ??? at each end of an edge, such that P has the same adjacencies as G, and there is a tail or arrowhead on an edge in P iff it is invariant in [G], otherwise it has a circle mark [8]. The CPAG of a given MAG is unique and maximally informative for [G]. We use CPAGs as a concise and intuitive graphical representation of all conditional (in)dependence relations between nodes in an observed distribution; see [7, 8] for more information on how to read independencies directly from a MAG/CPAG using the m-separation criterion, which is essentially just the d-separation criterion, only applied to MAGs. Throughout this article we also adopt the causal faithfulness assumption, which implies that all and only the conditional independence relations entailed by the causal Markov condition applied to the true causal DAG will hold in the joint probability distribution over the variables in GC . For an in-depth discussion of the justification and connection between these assumptions, see [9]. 2 3 Modeling the system Random variation in a system corresponds to the impact of unknown external variables, see [5]. Some of these external factors may be actively controlled, e.g. in clinical trials, or passively observed as the natural embedding of a system in its environment. We refer to both observational and controlled studies as experiments. External factors that affect two or more variables in a system simultaneously, can lead to dependencies that are not part of the system. Different external factors may bring about observed dependencies that differ between models, seemingly contradicting each other. By modeling this external environment explicitly as a set of unobserved (hypothetical) context nodes that causally affect the system under scrutiny we can account for this effect. Definition 2. The external context GE of a causal DAG GC is a set of independent nodes U in combination with links from every U ? U to one or more nodes in GC . The total causal structure of an experiment then becomes GT = {GE + GC }. Figure 1 depicts a causal system in three different experiments (double lined arrows indicate direct causal relations; dashed circles represent unobserved variables). The second and third experiment will result in an observed dependency between variables A and B, whereas the first one will not. The context only introduces arrows from nodes in GE to GC which can never result in a cycle, therefore the structure of an experiment GT is also a causal DAG. Note that differences in dependencies can only arise from different structures of the external context. Figure 1: A causal system GC in different experiments In this paradigm different experiments become variations in context of a constant causal system. The goal of causal discovery from multiple models can then be stated as: ?Given experiments with 0 unknown total causal structures GT = {GE + GC }, GT0 = {GE + GC }, etc., and known joint 0 0 0 probability distributions P (V ? GT ), P (V ? GT ), etc., which variables are connected by a directed path in GC ??. 
We assume that the large sample limit distributions P (V) are known and can be used to obtain categorical statements about probabilistic (in)dependencies between sets of nodes. Finally, we will assume there is no selection bias, see [10], nor blocking interventions on GC , as accounting for the impact would unnecessarily complicate the exposition. 4 Causal relations in arbitrary context A remarkable result that, to the best of our knowledge, has not been noted before, is that a minimal conditional independence always implies the presence of a causal relation. (See appendix for an outline of all proofs in this article.) Theorem 1. Let X, Y , Z and W be four disjoint (sets of) nodes (possibly empty) in an experiment with causal structure GT = {GE + GC }, then the following rules apply, for arbitrary GE (1) a minimal conditional independence [X ?? Y | Z] implies causal links Z ? X and/or Z ? Y from every Z ? Z to X and/or Y in GC , (2) a conditional dependence X ?  ? Y | Z ? W induced by a node W , i.e. with X ?? Y | Z, implies that there are no causal links W  ? X, W  ? Y or W  ? Z for any Z ? Z in GC , (3) a conditional independence X ? ? Y | Z implies the absence of (direct) causal paths X ? Y or X ? Y in GC between X and Y that are not mediated by nodes in Z. 3 The theorem establishes independence patterns that signify (absence of) a causal origin, independent of the (unobserved) external background. Rule (1) identifies a candidate pair of causal relations from a conditional independence. Rule (2) identifies the absence of causal paths from unshielded colliders in G, see also [1]. Rule (3) eliminates direct causal links between variables. The final step towards causal discovery from multiple models now takes a surprisingly simple form: Lemma 1. Let X, Y and Z ? Z be disjoint (sets of) variables in an experiment with causal structure GT = {GE + GC }, then if there exists both: ? a minimal conditional independence [X ?? Y | Z], ? established absence of a causal path Z  ? X, then there is a causal link (directed path) Z ? Y in GC . The crucial observation is that these two pieces of information can be obtained from different models. In fact, the origin of the information Z  ? X is irrelevant: be it from (in)dependencies via rule (2), other properties of the distribution, e.g. non-Gaussianity [11] or nonlinear features [12], or existing background knowledge. The only prerequisite for bringing results from various sources together is that the causal system at the centre is invariant, i.e. that the causal structure GC remains the same across the different experiments GT , GT0 etc. This result also shows why the well-known Y-structure: 4 nodes with X ? Z ? W and Z ? Y , see [4], always enables identification of the causal link Z ? Y : it is simply lemma 1 applied to overlapping nodes in a single model, in the form of rule (1) for [X ?? Y | Z], together with dependency X ?  ? W | Z created by Z to eliminate Z  ? X by rule (2). 5 Multiple models In this article we focus on combining multiple conditional independence models represented by CPAGs. We want to use these models to convey as much about the underlying causal structure GC as possible. We choose a causal CPAG as the target output model: similar in form and interpretation to a CPAG, where tails and arrowheads now represent all known (non)causal relations. This is not necessarily an equivalence class in accordance with the rules in [8], as it may contain more explicit information. 
5 Multiple models

In this article we focus on combining multiple conditional independence models represented by CPAGs. We want to use these models to convey as much about the underlying causal structure G_C as possible. We choose a causal CPAG as the target output model: similar in form and interpretation to a CPAG, where tails and arrowheads now represent all known (non)causal relations. This is not necessarily an equivalence class in accordance with the rules in [8], as it may contain more explicit information.

Ingredients for extracting this information are the rules in Theorem 1, in combination with the standard properties of causal relations: acyclicity (if X ⇒ Y then Y ⇏ X) and transitivity (if X ⇒ Y and Y ⇒ Z then X ⇒ Z). As the causal system is assumed invariant, the established (absence of) causal relations in one model are valid in all models. A straightforward brute-force implementation is given by Algorithm 1. The input is a set of CPAG models P_i, representing the conditional (in)dependence information between a set of observed variables, e.g. as learned by the extended FCI-algorithm [1, 8], from a number of different experiments G_T^(i) on an invariant causal system G_C. The output is the single causal CPAG G over the union of all nodes in the input models P_i.

Input: set of CPAGs P_i, fully ◦−◦ connected graph G
Output: causal graph G
1:  for all P_i do
2:      G ← eliminate all edges not appearing between nodes in P_i            ▹ Rule (3)
3:      G ← all definite (non)causal connections between nodes in P_i         ▹ invariant structure
4:  end for
5:  repeat
6:      for all P_i do
7:          for all {X, Y, Z, W} ∈ P_i do
8:              G ← (Z ⇏ {X, Y, W}), if X ⊥⊥ Y | W and X ⊥̸⊥ Y | {W ∪ Z}      ▹ Rule (2)
9:              G ← (Z ⇒ Y), if [X ⊥⊥ Y | {Z ∪ W}] and (Z ⇏ X) ∈ G            ▹ Rule (1)
10:         end for
11:     end for
12: until no more new non/causal information found

Algorithm 1: Brute-force implementation of rules (1)-(3)

Figure 2: Three different experiments, one causal model

As an example, consider the three CPAG models on the l.h.s. of Figure 2. None of these identifies a causal relation, yet despite the different (in)dependence relations, it is easily verified that the algorithm terminates after two loops with the nearly complete causal CPAG on the r.h.s. as the final output. Figure 1 shows corresponding experiments that explain the observed dependencies above. To the best of our knowledge, Algorithm 1 is the first algorithm ever to perform such a derivation.

Nevertheless, this brute-force approach exhibits a number of serious shortcomings. In the first place, the computational complexity of the repeated loop over all subsets in line 7 makes it not scalable: for small models like the ones in Figure 2 the derivation is almost immediate, but for larger models it quickly becomes unfeasible. Secondly, for sparsely overlapping models, i.e. when the observed variables differ substantially between the models, the algorithm can miss certain relations: when a causal relation is found to be absent between two non-adjacent nodes, this information cannot be recorded in G, and subsequent causal information identifiable by rule (1) may be lost. These problems are addressed in the next section, resulting in the MCI-algorithm.
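Before moving on, here is a loose sketch of Algorithm 1's inner loop, restricted to singleton conditioning and inducing nodes for brevity. It is my own rendering, not the authors' code: `indep` stands in for an (in)dependence oracle over one model, and `cause` / `no_cause` hold pairs (A, B) read as A ⇒ B and A ⇏ B.

```python
from itertools import combinations

def brute_force_pass(nodes, indep, cause, no_cause):
    """One sweep of rules (1) and (2) over all (X, Y, Z, W) combinations."""
    new = False
    for X, Y in combinations(nodes, 2):
        others = [n for n in nodes if n not in (X, Y)]
        for Z in others:
            for W in others:
                if W == Z:
                    continue
                # Rule (2): W induces a dependence given Z, so W causes
                # neither X, Y nor the conditioning node Z.
                if indep(X, Y, {Z}) and not indep(X, Y, {Z, W}):
                    for T in (X, Y, Z):
                        new |= (W, T) not in no_cause
                        no_cause.add((W, T))
            # Rule (1) plus Lemma 1: a minimal [X indep Y | Z] and an
            # established Z =/=> X leave only the causal link Z => Y.
            if indep(X, Y, {Z}) and not indep(X, Y, set()) and (Z, X) in no_cause:
                new |= (Z, Y) not in cause
                cause.add((Z, Y))
    return new   # repeat sweeps until this returns False
```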
6 The MCI-algorithm

To tackle the computational complexity we first introduce the following notion: a path ⟨X, ..., Y⟩ in a CPAG is called a possibly directed path (or p.d. path) from X to Y if it can be converted into a directed path by changing circle marks into appropriate tails and arrowheads [6]. We can now state:

Theorem 2. Let X and Y be two variables in an experiment with causal structure G_T = {G_E + G_C}, and let P_[G] be the corresponding CPAG over a subset of observed nodes from G_C. Then the absence of a causal link X ⇏ Y is detectable from the conditional (in)dependence structure in this experiment iff there exists no p.d. path from X to Y in P_[G].

In other words: X cannot be a cause (ancestor) of Y if all paths from X to Y in the graph P_[G] go against an invariant arrowhead (signifying non-ancestorship), and vice versa. We refer to this as rule (4). Calculating which variables are connected by a p.d. path from a given CPAG is straightforward: turn the graph into a {0, 1} adjacency matrix by setting all arrowheads to zero and all tails and circle marks to one, and compute the resulting reachability matrix. As this will uncover all detectable 'non-causal' relations in a CPAG in one go, it needs to be done only once for each model, and can be aggregated into a matrix M_C to make all tests for rule (2) in line 8 superfluous. If we also record all other established (non)causal relations in the matrix M_C as the algorithm progresses, then indirect causal relations are no longer lost when they cannot be transferred to the output graph G. The next lemma propagates indirect (non)causal information from M_C to edge marks in the graph:

Lemma 2. Let X, Y and Z be disjoint sets of variables in an experiment with causal structure G_T = {G_E + G_C}; then for every [X ⊥⊥ Y | Z]:
- every (indirect) causal relation X ⇒ Y implies causal links Z ⇒ Y,
- every (indirect) absence of a causal relation X ⇏ Y implies no causal links X ⇒ Z.

The first makes it possible to orient indirect causal chains, the second shortens indirect non-causal links. We refer to these as rules (5) and (6), respectively. As a final improvement it is worth noting that for rules (1), (5) and (6) it is only relevant to know that a node Z occurs in some Z in a minimal conditional independence relation [X ⊥⊥ Y | Z] separating X and Y, but not what the other nodes in Z are or in what model(s) it occurred. We can introduce a structure S_CI to record all nodes Z that occur in some minimal conditional independency in one of the models P_i for each combination of nodes (X, Y), before any of the rules (1), (5) or (6) is processed. As a result, in the repeated causal inference loop no conditional independence / m-separation tests need to be performed at all.

Input: set of CPAGs P_i, fully ◦−◦ connected graph G
Output: causal graph G, causal relations matrix M_C
1:  M_C ← 0                                                            ▹ no causal relations
2:  for all P_i do
3:      G ← eliminate all edges not appearing between nodes in P_i     ▹ Rule (3)
4:      M_C ← (X ⇏ Y), if no p.d. path ⟨X, ..., Y⟩ ∈ P_i               ▹ Rule (4)
5:      M_C ← (X ⇒ Y), if causal path ⟨X ⇒ ... ⇒ Y⟩ ∈ P_i              ▹ transitivity
6:      for all (X, Y, Z) ∈ P_i do
7:          S_CI ← triple (X, Y, Z), if Z ∈ Z for which [X ⊥⊥ Y | Z]   ▹ combined S_CI-matrix
8:      end for
9:  end for
10: repeat
11:     for all (X, Y, Z) ∈ G do
12:         M_C ← (Z ⇒ Y), for unused (X, Y, Z) ∈ S_CI with (Z ⇏ X) ∈ M_C   ▹ Rule (1)
13:         M_C ← (X ⇏ Z), for unused (X ⇏ Y) ∈ M_C with (X, Y, Z) ∈ S_CI   ▹ Rule (5)
14:         M_C ← (Z ⇒ Y), for unused (X ⇒ Y) ∈ M_C with (X, Y, Z) ∈ S_CI   ▹ Rule (6)
15:     end for
16: until no more new causal information found
17: G ← non/causal info in M_C                                         ▹ tails/arrowheads

Algorithm 2: MCI algorithm

With these results we can now give an improved version of the brute-force approach: the Multiple model Causal Inference (MCI) algorithm, above. The input is still a set of CPAG models from different experiments, but the output is now twofold: the graph G, containing the causal structure uncovered for the underlying system G_C, as well as the matrix M_C with an explicit representation of all (non)causal relations between observed variables, including remaining indirect information that cannot be read from the graph G. The first stage (lines 2-9) is a pre-processing step to extract all necessary information for the second stage from each of the models separately.
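The rule (4) test described above reduces to a few lines of matrix code. The following is a sketch under my own encoding assumptions (mark[(i, j)] holds the mark at node i's end of the edge between i and j), not code from the paper.

```python
import numpy as np

def pd_reachability(nodes, mark):
    """Rule (4): arrowheads -> 0, tails/circles -> 1, transitive closure."""
    n = len(nodes)
    idx = {v: k for k, v in enumerate(nodes)}
    A = np.zeros((n, n), dtype=bool)
    for (i, j), m in mark.items():
        if m in ('t', 'c'):                  # edge may be oriented i -> j
            A[idx[i], idx[j]] = True
    R = A.copy()
    for _ in range(n):                       # Boolean transitive closure
        R = R | ((R.astype(int) @ A.astype(int)) > 0)
    return R                                 # R[x, y]: some p.d. path x ~> y

# X o-> Z -> Y: a p.d. path X ~> Y exists, but none from Y back to X,
# so the absence of a causal link Y => X is detectable.
mark = {('X', 'Z'): 'c', ('Z', 'X'): 'a', ('Z', 'Y'): 't', ('Y', 'Z'): 'a'}
print(pd_reachability(['X', 'Z', 'Y'], mark))
```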
Building the S_CI matrix is the most expensive step as it involves testing for conditional independencies (m-separation) for increasing sets of variables. This can be implemented efficiently by noting that nodes connected by an edge will never be separated, and that many other combinations need not be tested because they contain a subset for which a (minimal) conditional independency has already been established. If a (non)causal relation is found between adjacent variables in G, or one that can be used to infer other intermediate relations (lines 13-14), then it can be marked as 'processed' to avoid unnecessary checks. The same holds for the entries recorded in the minimal conditional independence structure S_CI.

The MCI algorithm is provably sound in the sense that if all input CPAG models P_i are valid, then all (absence of) causal relations identified by the algorithm in the output graph G and (non)causal relations matrix M_C are also valid, provided that the causal system G_C is an invariant causal DAG and the causal faithfulness assumption is satisfied.

7 Experimental results

We tested the MCI-algorithm on a variety of synthetic data sets to verify its validity and to assess its behaviour and performance in uncovering causal information from multiple models. For the generation of random causal DAGs we used a variant of [13] to control the distribution of edges over nodes in the network. The random experiments in each run were generated from this causal DAG by including a random context and hidden nodes. For each network the corresponding CPAG was computed, and together these were used as the set of input models for the MCI-algorithm. The generated output G and M_C was verified against the true causal DAG and expressed as a percentage of the true number of (non-)causal relations.

To assess the performance we introduced two reference methods to act as a benchmark for the MCI-algorithm (in the absence of other algorithms that can validly handle different contexts). The first is a common sense method, indicated as 'sum-FCI', that utilizes the transitive closure of all causal relations in the input CPAGs that could have been identified by FCI in the large sample limit. As the second benchmark we take all causal information contained in the CPAG over the union of observed variables, independent of the context, hence 'nc-CPAG' for 'no context'. Note that this is not really a method, as it uses information directly derived from the true causal graph.
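As a concrete illustration of the S_CI pruning described at the start of this section, here is a small search for minimal separating sets in order of increasing size. The function names and the `max_k` cutoff are my own, and `indep` / `adjacent` stand in for m-separation tests on a learned model.

```python
from itertools import combinations

def minimal_separators(nodes, adjacent, indep, max_k=3):
    sci = {}                                  # (X, Y) -> minimal separating set
    for X, Y in combinations(sorted(nodes), 2):
        if adjacent(X, Y):
            continue                          # adjacent nodes never separate
        rest = [n for n in nodes if n not in (X, Y)]
        found = None
        for k in range(min(max_k, len(rest)) + 1):
            for Z in combinations(rest, k):
                if indep(X, Y, set(Z)):
                    found = set(Z)            # smallest size => minimal
                    break
            if found is not None:
                break                         # skip all larger supersets
        if found is not None:
            sci[(X, Y)] = found
    return sci
```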
[Figure 3 appears here in the original: three pairs of plots comparing MCI, sum-FCI and nc-CPAG; the plotted curves themselves are not recoverable from the extracted text.]

Figure 3: Proportion of causal relations discovered by the MCI-algorithm in different settings; (a) causal relations vs. nr. of models, (b) causal relations vs. nr. of context nodes, (c) non-causal relations (⇏) vs. nr. of models; (top) identical observed nodes in input models, (bottom) only partially overlapping observed nodes

In Figure 3, each graph depicts the percentage of causal (a&b) or non-causal (c) relations uncovered by each of the three methods, MCI, sum-FCI and nc-CPAG, as a function of the number of input models (a&c) or the number of nodes in the context (b), averaged over 200 runs, for both identical (top) and only partly overlapping (bottom) observed nodes in the input models. Performance is calculated as the proportion of uncovered relations as compared to the actual number of non/causal relations in the true causal graph over the union of observed nodes in each model set. In these runs the underlying causal graphs contained 12 nodes with edge degree ≤ 5. Tests for other, much sparser or denser graphs of up to 20 nodes showed comparable results.

Some typical behaviour is easily recognized: MCI always outperforms sum-FCI, and more input models always improve performance. Also, non-causal information (c) is much easier to find than definite causal relations (a&b). For single models / no context the performance of all three methods is very similar, although not necessarily identical. The perceived initial drop in performance in fig. 3(c, bottom) is only because, in going to two models, the number of non-causal relations in the union rises more quickly than the number of new relations that is actually found (due to lack of overlap).

A striking result that is clearly brought out is that adding random context actually improves the detection rate of causal relations. The rationale behind this effect is that externally induced links can introduce conditional dependencies, allowing the deduction of non-causal links that are not otherwise detectable; these in turn may lead to other causal relations that can be inferred, and so on. If the context is expanded further, at some point the detection rate will start to deteriorate as the causal structure becomes swamped by the externally induced links (b). We want to stress that for the many tens of thousands of (non)causal relations identified by the MCI algorithm in all the runs, not a single one was found to be invalid on comparison with the true causal graph.

For graphs of about 8 or more nodes the algorithm spends the majority of its time building the S_CI matrix in lines 6-8. The actual number of minimal conditional independencies found, however, is quite low, typically on the order of a few dozen for graphs of up to 12 nodes.

8 Conclusion

We have shown the first principled algorithm that can use results from different experiments to uncover new (non)causal information. It is provably sound in the large sample limit, provided the input models are learned by a valid algorithm like the FCI algorithm with CPAG extension [8]. In its current implementation the MCI-algorithm is a fast and practical method that can easily be applied to sets of models of up to 20 nodes. Compared to related algorithms like ION, it produces very concise and easily interpretable output, and does not suffer from the inability to handle any differences in observed dependencies between data sets [3]. For larger models it can be converted into an anytime algorithm by running over minimal conditional independencies from subsets of increasing size: at each level all uncovered causal information is valid, and, for reasonably sparse models, most will already be found at low levels. For very large models an exciting possibility is to target only specific causal relations: finding the right combination of (in)dependencies is sufficient to decide if it is causal, even when there is no hope of deriving a global CPAG model.
From its construction, the MCI-algorithm is sound but not necessarily complete. Preliminary results show that Theorem 1 already covers all invariant arrowheads in the single model case [8], and suggest that one additional rule is sufficient to cover all tails as well. We aim to extend this result to the multiple model domain. Integrating our approach with recent developments in causal discovery that are not based on independence constraints [11, 12] can provide for even more detectable causal information.

When applied to real data sets the large sample limit no longer applies, and inconsistent causal relations may result. It should be possible to exclude the contribution of such links from the final output. Alternatively, the output might be generalized to quantities like 'the probability of a causal relation', based on the strength of appropriate conditional (in)dependencies in the available data.

Acknowledgement
This research was supported by VICI grant 639.023.604 from the Netherlands Organization for Scientific Research (NWO).

Appendix - proof outlines

Theorem 1 / Lemma 1 (for details see also [14])
(1) Without selection bias, nodes X and Y are dependent iff they are connected by treks in G_T. A node Z ∈ Z that blocks such a trek has a directed path in G_C to X and/or Y, but can unblock other paths. These paths contain a trek between Z and X/Y, and must be blocked by a node Z' ∈ Z\{Z}, which therefore also has a causal link to X or Y (possibly via Z). Z' in turn can unblock other paths, etc. Minimality guarantees that this holds for all nodes in Z. Eliminating Z ⇒ X then leaves Z ⇒ Y as the only option (Lemma 1).
(2) To create the dependency, W must be a (descendant of a) collider between two unblocked paths π1 = ⟨X, ..., W⟩ and π2 = ⟨Y, ..., W⟩ given Z. Any directed path from W to a Z ∈ Z implies that conditioning on W is not needed when already conditioning on Z. In combination with π1 or π2, a directed path from W to X or Y in G_C would make W a noncollider on an unblocked path between X and Y given Z, contrary to X ⊥⊥ Y | Z.
(3) A directed path between X and Y that is not blocked by Z would result in X ⊥̸⊥ Y | Z, see [1].

Theorem 2
(⇐) follows from the fact that a directed path π = ⟨X, ..., Y⟩ in the underlying causal DAG G_C implies the existence of a directed path in the true MAG over the observed nodes, and therefore at least the existence of a p.d. path in the CPAG P_[G].
(⇒) follows from the completeness of the CPAG, in combination with theorem 2 in [8] about orientability of CPAGs into MAGs. This, together with Meek's algorithm [15] for orienting chordal graphs into DAGs with no unshielded colliders, shows that it is always possible to turn a p.d. path into a directed path in a MAG that is a member of the equivalence class P_[G]. Therefore, a p.d. path from X to Y in P_[G] implies there is at least some underlying causal DAG in which it is a causal path, and so cannot correspond to a valid, detectable absence of a causal link.

Lemma 2
From rule (1) in Theorem 1, [X ⊥⊥ Y | Z] implies causal links Z ⇒ X and/or Z ⇒ Y. If X ⇒ Y, then by transitivity Z ⇒ X also implies Z ⇒ Y. If X ⇏ Y, then for any Z ∈ Z, X ⇒ Z implies Z ⇒ Y and so (transitivity) also X ⇒ Y, in contradiction of the given; therefore X ⇏ Z.

References
[1] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. Cambridge, Massachusetts: The MIT Press, 2nd ed., 2000.
[2] D. Chickering, "Optimal structure identification with greedy search," Journal of Machine Learning Research, vol. 3, no. 3, pp. 507-554, 2002.
[3] R. Tillman, D. Danks, and C. Glymour, "Integrating locally learned causal structures with overlapping variables," in Advances in Neural Information Processing Systems 21, 2008.
[4] S. Mani, G. Cooper, and P. Spirtes, "A theoretical study of Y structures for causal discovery," in Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, pp. 314-323, 2006.
[5] J. Pearl, Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
[6] J. Zhang, "Causal reasoning with ancestral graphs," Journal of Machine Learning Research, vol. 9, pp. 1437-1474, 2008.
[7] T. Richardson and P. Spirtes, "Ancestral graph Markov models," Annals of Statistics, vol. 30, no. 4, pp. 962-1030, 2002.
[8] J. Zhang, "On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias," Artificial Intelligence, vol. 172, no. 16-17, pp. 1873-1896, 2008.
[9] J. Zhang and P. Spirtes, "Detection of unfaithfulness and robust causal inference," Minds and Machines, vol. 2, no. 18, pp. 239-271, 2008.
[10] P. Spirtes, C. Meek, and T. Richardson, "An algorithm for causal inference in the presence of latent variables and selection bias," in Computation, Causation, and Discovery, pp. 211-252, 1999.
[11] S. Shimizu, P. Hoyer, A. Hyvärinen, and A. Kerminen, "A linear non-Gaussian acyclic model for causal discovery," Journal of Machine Learning Research, vol. 7, pp. 2003-2030, 2006.
[12] P. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf, "Nonlinear causal discovery with additive noise models," in Advances in Neural Information Processing Systems 21 (NIPS*2008), pp. 689-696, 2009.
[13] J. Ide and F. Cozman, "Random generation of Bayesian networks," in Advances in Artificial Intelligence, pp. 366-376, Springer Berlin, 2002.
[14] T. Claassen and T. Heskes, "Learning causal network structure from multiple (in)dependence models," in Proceedings of the Fifth European Workshop on Probabilistic Graphical Models, 2010.
[15] C. Meek, "Causal inference and causal explanation with background knowledge," in UAI, pp. 403-410, Morgan Kaufmann, 1995.
Non-Stochastic Bandit Slate Problems

Satyen Kale, Yahoo! Research, Santa Clara, CA (skale@yahoo-inc.com)
Lev Reyzin*, Georgia Inst. of Technology, Atlanta, GA (lreyzin@cc.gatech.edu)
Robert E. Schapire**, Princeton University, Princeton, NJ (schapire@cs.princeton.edu)

* This work was done while Lev Reyzin was at Yahoo! Research, New York. This material is based upon work supported by the National Science Foundation under Grant #0937060 to the Computing Research Association for the Computing Innovation Fellowship program.
** This work was done while R. Schapire was visiting Yahoo! Research, New York.

Abstract

We consider bandit problems, motivated by applications in online advertising and news story selection, in which the learner must repeatedly select a slate, that is, a subset of size s from K possible actions, and then receives rewards for just the selected actions. The goal is to minimize the regret with respect to the total reward of the best slate computed in hindsight. We consider unordered and ordered versions of the problem, and give efficient algorithms which have regret O(√T), where the constant depends on the specific nature of the problem. We also consider versions of the problem where we have access to a number of policies which make recommendations for slates in every round, and give algorithms with O(√T) regret for competing with the best such policy as well. We make use of the technique of relative entropy projections combined with the usual multiplicative weight update algorithm to obtain our algorithms.

1 Introduction

In traditional bandit models, the learner is presented with a set of K actions. On each of T rounds, an adversary (or the world) first chooses rewards for each action, and afterwards the learner decides which action it wants to take. The learner then receives the reward of its chosen action, but does not see the rewards of the other actions. In the standard bandit setting, the learner's goal is to compete with the best fixed arm in hindsight. In the more general "experts setting," each of N experts recommends an arm on each round, and the goal of the learner is to perform as well as the best expert in hindsight. The bandit setting tackles many problems where a learner's decisions reflect not only how well it performs but also the data it learns from: a good algorithm will balance exploiting actions it already knows to be good and exploring actions for which its estimates are less certain.

One such real-world problem appears in computational advertising, where publishers try to present their customers with relevant advertisements. In this setting, the actions correspond to advertisements, and choosing an action means displaying the corresponding ad. The rewards correspond to the payments from the advertiser to the publisher, and these rewards depend on the probability of users clicking on the ads. Unfortunately, many real-world problems, including the computational advertising problem, do not fit so nicely into the traditional bandit framework. Most of the time, advertisers have the ability to display more than one ad to users, and users can click on more than one of the ads displayed to them. To capture this reality, in this paper we define the slate problem. This setting is similar to the traditional bandit setting, except that here the advertiser selects a slate, or subset, of s actions.

In this paper we first consider the unordered slate problem, where the reward to the learning algorithm is the sum of the rewards of the chosen actions in the slate. This setting is applicable when all
actions in a slate are treated equally. While this is a realistic assumption in certain settings, we also deal with the case when different positions in a slate have different importance. Going back to our computational advertising example, we can see that not all ads are given the same treatment (i.e. an ad displayed higher in a list is more likely to be clicked on). One may plausibly assume that for every ad and every position that it can be shown in, there is a click-through-rate associated with the (ad, position) pair, which specifies the probability that a user will click on the ad if it is displayed in that position. This is a very general user model used widely in practice in web search engines. To abstract this, we turn to the ordered slate problem, where for each action and position in the ordering, the adversary specifies a reward for using the action in that position. The reward to the learner then is the sum of the rewards of the (action, position) pairs in the chosen ordered slate.(1) This setting is similar to that of György, Linder, Lugosi and Ottucsák [10] in that the cost of all actions in the chosen slate are revealed, rather than just the total cost of the slate.

Finally, we show how to tackle these problems in the experts setting, where instead of competing with the best slate in hindsight, the algorithm competes with the best expert, recommending different slates on different rounds.

One key idea appearing in our algorithms is to use a variant of the multiplicative weights expert algorithm for a restricted convex set of distributions. In our case, the restricted set of distributions over actions corresponds to the one defined by the stipulation that the learner choose a slate instead of individual actions. Our variant first finds the distribution generated by multiplicative weights and then chooses the closest distribution in the restricted subset using relative entropy as the distance metric; this is a type of Bregman projection, which has certain nice properties for our analysis.

Previous Work. The multi-armed bandit problem, first studied by Lai and Robbins [15], is a classic problem which has had wide application. In the stochastic setting, where the rewards of the arms are i.i.d., Lai and Robbins [15] and Auer, Cesa-Bianchi and Fischer [2] gave regret bounds of O(K ln(T)). In the non-stochastic setting, Auer et al. [3] gave regret bounds of O(√(K ln(K) T)).(2) This non-stochastic setting of the multi-armed bandit problem is exactly the specific case of our problem when the slate size is 1, and hence our results generalize those of Auer et al., which can be recovered by setting s = 1. Our problem is a special case of the more general online linear optimization with bandit feedback problem [1, 4, 5, 11]. Specializing the best result in this series to our setting, we get worse regret bounds of O(√(T log(T))). The constant in the O(·) notation is also worse than our bounds. For a more specific comparison of regret bounds, see Section 2. Our algorithms, being specialized for the slates problem, are simpler to implement as well, avoiding the sophisticated self-concordant barrier techniques of [1]. This work also builds upon the algorithm in [18] to learn subsets of experts and the algorithm in [12] for learning permutations, both in the full information setting. Our work is also a special case of the Combinatorial Bandits setting of Cesa-Bianchi and Lugosi [9]; however, our algorithms obtain better regret bounds and are computationally more efficient.
Our multiplicative weights algorithm also appears under the name Component Hedge in the independent work of Koolen, Warmuth and Kivinen [14]. Furthermore, the expertless, unordered slate problem is studied by Uchiya, Nakamura and Kudo [17], who obtain the same asymptotic bounds as appear in this paper, though using different techniques.

(1) The unordered slate problem is a special case of the ordered slate problem for which all positional factors are equal. However, the bound on the regret that we get when we consider the unordered slate problem separately is a factor of O(√s) better than when we treat it as a special case of the ordered slate problem.
(2) The difference in the regret bounds can be attributed to the definition of regret in the stochastic and non-stochastic settings. In the stochastic setting, we compare the algorithm's expected reward to that of the arm with the largest expected reward, with the expectation taken over the reward distribution.

2 Statement of the problem and main results

Notation. For vectors x, y ∈ R^K, x · y denotes their inner product, viz. Σ_i x_i y_i. For matrices X, Y ∈ R^(s×K), X • Y denotes their inner product considering them as vectors in R^(sK), viz. Σ_ij X_ij Y_ij. For a set S of actions, let 1_S be the indicator vector for that set. For two distributions p and q, let RE(p ∥ q) denote their relative entropy, i.e. RE(p ∥ q) = Σ_i p_i ln(p_i/q_i).

Problem Statement. In a sequence of rounds, for t = 1, 2, ..., T, we are required to choose a slate from a base set A of K actions. An unordered slate is a subset S ⊆ A of s out of the K actions. An ordered slate is a slate together with an ordering over its s actions; thus, it is a one-to-one mapping σ: {1, 2, ..., s} → A. Prior to the selection of the slate, the adversary chooses losses(3) for the actions in the slates. Once the slate is chosen, the cost of only the actions in the chosen slate is revealed. This cost is defined in the following manner:

- Unordered slate. The adversary chooses a loss vector ℓ(t) ∈ R^K which specifies a loss ℓ_j(t) ∈ [−1, 1] for every action j ∈ A. For a chosen slate S, only the coordinates ℓ_j(t) for j ∈ S are revealed, and the cost incurred for choosing S is Σ_{j∈S} ℓ_j(t).

- Ordered slate. The adversary chooses a loss matrix L(t) ∈ R^(s×K) which specifies a loss L_ij(t) ∈ [−1, 1] for every action j ∈ A and every position i, 1 ≤ i ≤ s, in the ordering on the slate. For a chosen slate σ, the entries L_{i,σ(i)}(t) for every position i are revealed, and the cost incurred for choosing σ is Σ_{i=1}^s L_{i,σ(i)}(t).

In the unordered slate problem, if slate S(t) is chosen in round t, for t = 1, 2, ..., T, then the regret of the algorithm is defined to be

    Regret_T = Σ_{t=1}^T Σ_{j∈S(t)} ℓ_j(t) − min_S Σ_{t=1}^T Σ_{j∈S} ℓ_j(t).

Here, the subscript S is used as a shorthand for ranging over all slates S. The regret for the ordered slate problem is defined analogously. Our goal is to design a randomized algorithm for online slate selection such that E[Regret_T] = o(T), where the expectation is taken over the internal randomization of the algorithm.

Competing with policies. Frequently in applications we have access to N policies, which are algorithms that recommend slates to use in every round. These policies might leverage extra information that we have about the losses in the next round. It is therefore beneficial to devise algorithms that have low regret with respect to the best policy in the pool in hindsight, where regret is defined as:

    Regret_T = Σ_{t=1}^T Σ_{j∈S(t)} ℓ_j(t) − min_π Σ_{t=1}^T Σ_{j∈S_π(t)} ℓ_j(t).

Here, π ranges over all policies, S_π(t) is the recommendation of policy π at time t, and S(t) is the algorithm's chosen slate. The regret is defined analogously for ordered slates.
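To make the two loss models concrete, here is a tiny numeric illustration (all values invented for the example) of the cost of an unordered slate versus an ordered slate.

```python
import numpy as np

K, s = 4, 2
loss_vec = np.array([0.3, -0.5, 0.1, 0.9])   # unordered: one loss per action
S = [1, 2]                                   # chosen unordered slate
print(loss_vec[S].sum())                     # cost = -0.5 + 0.1 = -0.4

loss_mat = np.array([[0.3, -0.5, 0.1, 0.9],  # ordered: loss per (position, action)
                     [0.2,  0.4, -0.1, 0.0]])
sigma = {1: 1, 2: 2}                         # position -> action
print(sum(loss_mat[i - 1, a] for i, a in sigma.items()))   # -0.5 + -0.1 = -0.6
```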
More generally, we may allow policies to recommend distributions over slates, and our goal is to minimize the expected regret with respect to the best policy in hindsight, where the expectation is taken over the distribution recommended by the policy as well as the internal randomization of the algorithm.

Our results. We are now able to formally state our main results:

Theorem 2.1. There are efficient (running in poly(s, K) time in the no-policies case, and in poly(s, K, N) time with N policies) randomized algorithms achieving the following regret bounds:

                 Unordered slates                    Ordered slates
No policies      4√(sK ln(K/s) T)   (Sec. 3.2)       4s√(K ln(K) T)   (Sec. 3.3)
N policies       4√(sK ln(N) T)     (Sec. 4.1)       4s√(K ln(N) T)   (Sec. 4.2)

To compare, the best bounds obtained for the no-policies case using the more general algorithms of [1] and [9] are O(√(s^3 K ln(K/s) T)) in the unordered slates problem, and O(s^2 √(K ln(K) T)) in the ordered slates problem. It is also possible, in the no-policies setting, to devise algorithms that have regret bounded by O(√T) with high probability, using the upper confidence bounds technique of [3]. We omit these algorithms in this paper for the sake of brevity.

(3) Note that we switch to losses rather than rewards to be consistent with most recent literature on online learning. Since we allow negative losses, we can easily deal with rewards as well.

Algorithm MW(P)
Initialization: An arbitrary probability distribution p(1) ∈ P on the experts, and some η > 0.
For t = 1, 2, ..., T:
1. Choose distribution p(t) over experts, and observe the cost vector ℓ(t).
2. Compute the probability vector p̂(t + 1) using the following multiplicative update rule: for every expert i,
       p̂_i(t + 1) = p_i(t) exp(−η ℓ_i(t)) / Z(t),                        (1)
   where Z(t) = Σ_i p_i(t) exp(−η ℓ_i(t)) is the normalization factor.
3. Set p(t + 1) to be the projection of p̂(t + 1) on the set P using the RE as a distance function, i.e. p(t + 1) = arg min_{p∈P} RE(p ∥ p̂(t + 1)).

Figure 1: The Multiplicative Weights Algorithm with Restricted Distributions

3 Algorithms for the slate problems with no policies

3.1 Main algorithmic ideas

Our starting point is the Hedge algorithm for learning online with expert advice. In this setting, on each round t, the learner chooses a probability distribution p(t) over experts, each of which then suffers a (fully observable) loss represented by the vector ℓ(t). The learner's loss is then p(t) · ℓ(t). The main idea of our approach is to apply Hedge (and ideas from bandit variants of it, especially Exp3 [3]) by associating the probability distributions that it selects with mixtures of (ordered or unordered) slates, and thus with the randomized choice of a slate. However, this requires that the selected probability distributions have a particular form, which we describe shortly. We therefore need a special variant of Hedge which uses only distributions p(t) from some fixed convex subset P of the simplex of all distributions. The goal then is to minimize regret relative to an arbitrary distribution p ∈ P. Such a version of Hedge is given in Figure 1, and a statement of its performance below. This algorithm is implicit in the work of [13, 18].

Theorem 3.1. Assume that η > 0 is chosen so that for all t and i, η ℓ_i(t) ≥ −1. Then algorithm MW(P) generates distributions p(1), ..., p(T) ∈ P, such that for any p ∈ P,

    Σ_{t=1}^T ℓ(t) · p(t) − Σ_{t=1}^T ℓ(t) · p ≤ η Σ_{t=1}^T (ℓ(t))^2 · p(t) + RE(p ∥ p(1)) / η.

Here, (ℓ(t))^2 is the vector that is the coordinate-wise square of ℓ(t).

3.2 Unordered slates with no policies

To apply the approach described above, we need a way to compactly represent the set of distributions over slates. We do this by embedding slates as points in some high-dimensional Euclidean space, and then giving a compact representation of the convex hull of the embedded points. Specifically, we represent an unordered slate S by its indicator vector 1_S ∈ R^K, which is 1 for all coordinates j ∈ S, and 0 for all others. The convex hull X of all such 1_S vectors can be succinctly described [18] as the convex polytope defined by the linear constraints Σ_{j=1}^K x_j = s and 0 ≤ x_j ≤ 1 for j = 1, ..., K. An algorithm is given in [18] (Algorithm 2) to decompose any vector x ∈ X into a convex combination of at most K indicator vectors 1_S. We embed the convex hull X of all the 1_S vectors in the simplex of distributions over the K actions simply by scaling down all coordinates by s so that they sum to 1. Let P be this scaled-down version of X. Our algorithm is given in Figure 2.

Step 3 of MW(P) requires us to compute arg min_{p∈P} RE(p ∥ p̂(t + 1)), which can be solved by convex programming. A linear time algorithm is given in [13], and a simple algorithm (from [18]) is the following: find the least index k such that clipping the largest k coordinates of p to 1/s and rescaling the rest of the coordinates to sum up to 1 − k/s ensures that all coordinates are at most 1/s, and output the probability vector thus obtained. This can be implemented by sorting the coordinates, and so it takes O(K log(K)) time.

Bandit Algorithm for Unordered Slates
Initialization: Start an instance of MW(P) with the uniform initial distribution p(1) = (1/K) 1, η = √((1−γ) s ln(K/s) / (KT)), and γ = √((K/s) ln(K/s) / T). For t = 1, 2, ..., T:
1. Obtain the distribution p(t) from MW(P).
2. Set p′(t) = (1 − γ) p(t) + (γ/K) 1_A.
3. Note that p′(t) ∈ P. Decompose s p′(t) as a convex combination of slate vectors 1_S corresponding to slates S as s p′(t) = Σ_S q_S 1_S, where q_S > 0 and Σ_S q_S = 1.
4. Choose a slate S to display with probability q_S, and obtain the loss ℓ_j(t) for all j ∈ S.
5. Set ℓ̂_j(t) = ℓ_j(t) / (s p′_j(t)) if j ∈ S, and 0 otherwise.
6. Send ℓ̂(t) as the loss vector to MW(P).

Figure 2: The Bandit Algorithm with Unordered Slates

We now prove the regret bound of Theorem 2.1. We use the notation E_t[X] to denote the expectation of a random variable X conditioned on all the randomness chosen by the algorithm up to round t, assuming that X is measurable with respect to this randomness. We note the following facts: E_t[ℓ̂_j(t)] = Σ_{S∋j} q_S · ℓ_j(t)/(s p′_j(t)) = ℓ_j(t), since p′_j(t) = Σ_{S∋j} q_S · (1/s). This immediately implies that E_t[ℓ̂(t) · p(t)] = ℓ(t) · p(t) and E[ℓ̂(t) · p] = ℓ(t) · p, for any fixed distribution p. Note that if we decompose a distribution p ∈ P as a convex combination of (1/s) 1_S vectors and randomly choose a slate S according to its weight in the combination, then the expected loss, averaged over the s actions chosen, is ℓ(t) · p. We can bound the difference between the expected loss (averaged over the s actions) in round t suffered by the algorithm, ℓ(t) · p′(t), and ℓ(t) · p(t) as follows:

    ℓ(t) · p′(t) − ℓ(t) · p(t) = Σ_j ℓ_j(t) (p′_j(t) − p_j(t)) ≤ Σ_j ℓ_j(t) · γ/K ≤ γ.

Using this bound and Theorem 3.1, if S* = arg min_S Σ_t ℓ(t) · (1/s) 1_S, we have

    E[Regret_T]/s = Σ_t ℓ(t) · p′(t) − Σ_t ℓ(t) · (1/s) 1_{S*} ≤ η Σ_t E[(ℓ̂(t))^2 · p(t)] + RE((1/s) 1_{S*} ∥ p(1)) / η + γT.

We note that the leading factor of 1/s on the expected regret is due to the averaging over the s positions. We now bound the terms on the RHS. First, we have

    E_t[(ℓ̂(t))^2 · p(t)] = Σ_S q_S [ Σ_{j∈S} (ℓ_j(t))^2 p_j(t) / (s p′_j(t))^2 ]
                        = Σ_j (ℓ_j(t))^2 p_j(t) / (s p′_j(t))^2 · [ Σ_{S∋j} q_S ]
                        = Σ_j (ℓ_j(t))^2 p_j(t) / (s p′_j(t))^2 · s p′_j(t) ≤ K / (s(1−γ)),

because p_j(t)/p′_j(t) ≤ 1/(1−γ), and all |ℓ_j(t)| ≤ 1. Furthermore, RE((1/s) 1_{S*} ∥ p(1)) = ln(K/s). Plugging these bounds in, we get the stated regret bound from Theorem 2.1:

    E[Regret_T] ≤ η KT/(1−γ) + s ln(K/s)/η + sγT ≤ 4√(sK ln(K/s) T),

by setting η = √((1−γ) s ln(K/s) / (KT)) and γ = √((K/s) ln(K/s) / T). It remains to verify that η ℓ̂_j(t) ≥ −1 for all j and t. We know that ℓ̂_j(t) ≥ −K/(sγ), because p′_j(t) ≥ γ/K, so all we need to check is that √((1−γ) s ln(K/s) / (KT)) ≤ sγ/K, which is true for our choice of γ.

Bandit Algorithm for Ordered Slates
Initialization: Start an instance of MW(P) with the uniform initial distribution p(1) = (1/(sK)) 1, η = √((1−γ) ln(K) / (KT)) and γ = √(K ln(K) / T). For t = 1, 2, ..., T:
1. Obtain the distribution p(t) from MW(P).
2. Set p′(t) = (1 − γ) p(t) + (γ/(sK)) 1_A.
3. Note that p′(t) ∈ P, and so s p′(t) ∈ M. Decompose s p′(t) as a convex combination of M^σ matrices corresponding to ordered slates σ as s p′(t) = Σ_σ q_σ M^σ, where q_σ > 0 and Σ_σ q_σ = 1.
4. Choose a slate σ to display w.p. q_σ, and obtain the loss L_{i,σ(i)}(t) for all 1 ≤ i ≤ s.
5. Construct the loss matrix L̂(t) as follows: for 1 ≤ i ≤ s, set L̂_{i,σ(i)}(t) = L_{i,σ(i)}(t) / (s p′_{i,σ(i)}(t)), and all other entries are 0.
6. Send L̂(t) as the loss matrix to MW(P).

Figure 3: Bandit Algorithm for Ordered Slates

3.3 Ordered slates with no policies

A similar approach can be used for ordered slates. Here, we represent an ordered slate σ by the subpermutation matrix M^σ ∈ R^(s×K), which is defined as follows: for i = 1, 2, ..., s, we have M^σ_{i,σ(i)} = 1, and all other entries are 0. In [7, 16], it is shown that the convex hull M of all the M^σ matrices is the convex polytope defined by the linear constraints: Σ_{j=1}^K M_ij = 1 for i = 1, ..., s; Σ_{i=1}^s M_ij ≤ 1 for j = 1, ..., K; and M_ij ≥ 0 for i = 1, ..., s and j = 1, ..., K. Clearly, all subpermutation matrices M^σ are in M. To complete the characterization of the convex hull, we can show (details omitted) that given any matrix M ∈ M, we can efficiently decompose it into a convex combination of at most K^2 subpermutation matrices. We identify matrices in R^(s×K) with vectors in R^(sK) in the obvious way. We embed M in the simplex of distributions in R^(sK) simply by scaling all the entries down by s so that their sum equals one. Let P be this scaled-down version of M. Our algorithm is given in Figure 3.

The projection in step 3 of MW(P) can be computed by solving the convex program directly. In practice, however, noticing that the relative entropy projection is a Bregman projection, the cyclic projections method of Bregman [6, 8] is likely to work faster. Adapted to the specific problem at hand, this method works as follows (see [8] for details): first, for every column j, initialize a dual variable µ_j = 1. Then, alternate between row phases and column phases. In a row phase, iterate over all rows, and rescale them to make them sum to 1/s. The column phase is a little more complicated: first, for every column j, compute the scaling factor λ that would make it sum to 1/s. Set λ′ = min{µ_j, λ}, scale the column by λ′, and update µ_j ← µ_j/λ′. Repeat these alternating row and column phases until convergence to within the desired tolerance.

The regret bound analysis is similar to that of Section 3.2. We have E_t[L̂_ij(t)] = Σ_{σ:σ(i)=j} q_σ · L_ij(t)/(s p′_ij(t)) = L_ij(t), and hence E_t[L̂(t) • p(t)] = L(t) • p(t) and E[L̂(t) • p] = L(t) • p. We can also show that L(t) • p′(t) − L(t) • p(t) ≤ γ. Using this bound and Theorem 3.1, if σ* = arg min_σ Σ_t L(t) • (1/s) M^σ, we have

    E[Regret_T]/s = Σ_t L(t) • p′(t) − Σ_t L(t) • (1/s) M^{σ*} ≤ η Σ_t E[(L̂(t))^2 • p(t)] + RE((1/s) M^{σ*} ∥ p(1)) / η + γT.

We now bound the terms on the RHS. First, we have

    E_t[(L̂(t))^2 • p(t)] = Σ_σ q_σ [ Σ_{i=1}^s (L_{i,σ(i)}(t))^2 p_{i,σ(i)}(t) / (s p′_{i,σ(i)}(t))^2 ]
                         = Σ_{i=1}^s Σ_{j=1}^K (L_ij(t))^2 p_ij(t) / (s p′_ij(t))^2 · [ Σ_{σ:σ(i)=j} q_σ ]
                         = Σ_{i=1}^s Σ_{j=1}^K (L_ij(t))^2 p_ij(t) / (s p′_ij(t))^2 · s p′_ij(t) ≤ K/(1−γ),

because p_ij(t)/p′_ij(t) ≤ 1/(1−γ), and all |L_ij(t)| ≤ 1. Finally, we have RE((1/s) M^{σ*} ∥ p(1)) = ln(K). Plugging these bounds into the bound of Theorem 3.1, we get the stated regret bound from Theorem 2.1:

    E[Regret_T] ≤ η sKT/(1−γ) + s ln(K)/η + sγT ≤ 4s√(K ln(K) T),

by setting η = √((1−γ) ln(K) / (KT)) and γ = √(K ln(K) / T), which satisfy the necessary technical conditions.
Note that p0 (t) ? P, and so sp0 (t) ? X . Decompose sp0 (t) as Pa convex combination of M? P matrices corresponding to ordered slates ? as sp0 (t) = ? q? M? , where q? > 0 and ? q? = 1. 5. Choose a slate ? to display w.p. q? , and obtain the loss Li,?(i) (t) for all 1 ? i ? s. (t) ? as follows: for 1 ? i ? s, set L ? i,?(i) (t) = Li,?(i) 6. Construct the loss matrix L(t) , and 0 spi,?(i) (t) all other entries are 0. ? ? ? (t) in the MW algorithm. 7. Set the loss of policy ? to be ?? (t) = L(t) ? Figure 5: Bandit Algorithm for Ordered Slates with Policies The first inequality above follows from Jensen?s inequality, and the second one is proved exactly as in Section 3.2. Finally, we have RE(e?? k p(1)) = ln(N ). Plugging these bounds into the bound above, we get the stated regret bound from Theorem 2.1: p s ln(N ) KT + + s?T ? 4 sK ln(N )T , E[RegretT ] ? ? 1?? ? q q ln(N ) by setting ? = (1??)s and ? = (K/s)Tln(N ) , which satisfy the necessary technical condiKT tions. 4.2 Ordered Slates with N Policies In each round, every policy ? recommends a distribution over ordered slates ?? (t) ? P, where P is M scaled down by s as in Section 3.3. Our algorithm is given in Figure 5. ? playing The regret bound analysis is exactly along the lines of that in Section 4.1, with L(t) and L(t) ? the roles of `(t) and `(t) respectively, with the inequalities from Section 3.3. We omit the details for brevity. We get the stated regret bound from Theorem 2.1: p E[RegretT ] ? 4s K ln(N )T . 5 Conclusions and Future Work In this paper, we presented efficient algorithms for the unordered and ordered slate problems with ? regret bounds of O( T ), in the presence and and absence of policies, employing the technique of Bregman projections on a convex set representing the convex hull of slate vectors. Possible future work on this problem is in two directions. The first direction is to handle other user models for the loss matrices, such as models incorporating the following sort of interaction between the chosen actions: if two very similar ads are shown, and the user clicks on one, then the user is less likely to click on the other. Our current model essentially assumes no interaction. ? The second direction is to derive high probability O( T ) regret bounds for the slate problems in the presence of policies. The techniques of [3] only give such algorithms in the no-policies setting. References [1] A BERNETHY, J., H AZAN , E., AND R AKHLIN , A. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT (2008), pp. 263?274. 8 [2] AUER , P., C ESA -B IANCHI , N., AND F ISCHER , P. Finite-time analysis of the multiarmed bandit problem. Machine Learning 47, 2-3 (2002), 235?256. [3] AUER , P., C ESA -B IANCHI , N., F REUND , Y., AND S CHAPIRE , R. E. The nonstochastic multiarmed bandit problem. SIAM J. Comput. 32, 1 (2002), 48?77. [4] AWERBUCH , B., AND K LEINBERG , R. Online linear optimization and adaptive routing. J. Comput. Syst. Sci. 74, 1 (2008), 97?114. [5] BARTLETT, P. L., DANI , V., H AYES , T. P., K AKADE , S., R AKHLIN , A., AND T EWARI , A. High-probability regret bounds for bandit online linear optimization. In COLT (2008), pp. 335?342. [6] B REGMAN , L. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comp. Mathematics and Mathematical Physics 7 (1967), 200?217. [7] B RUALDI , R. A., AND L EE , G. M. On the truncated assignment polytope. Linear Algebra and its Applications 19 (1978), 33?62. 
[8] C ENSOR , Y., AND Z ENIOS , S. Parallel optimization. Oxford University Press, 1997. [9] C ESA -B IANCHI , N., AND L UGOSI , G. Combinatorial bandits. In COLT (2009). ? ? , G. The on-line shortest path [10] G Y ORGY , A., L INDER , T., L UGOSI , G., AND OTTUCS AK problem under partial monitoring. Journal of Machine Learning Research 8 (2007), 2369? 2403. [11] H AZAN , E., AND K ALE , S. Better algorithms for benign bandits. In SODA (2009), pp. 38?47. [12] H ELMBOLD , D. P., AND WARMUTH , M. K. Learning permutations with exponential weights. In COLT (2007), pp. 469?483. [13] H ERBSTER , M., AND WARMUTH , M. K. Tracking the best linear predictor. Journal of Machine Learning Research 1 (2001), 281?309. [14] KOOLEN , W. M., WARMUTH , M. K., AND K IVINEN , J. Hedging structured concepts. In COLT (2010). [15] L AI , T., AND ROBBINS , H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6 (1985), 4?22. [16] M ENDELSOHN , N. S., AND D ULMAGE , A. L. The convex hull of sub-permutation matrices. Proceedings of the American Mathematical Society 9, 2 (Apr 1958), 253?254. [17] U CHIYA , T., NAKAMURA , A., AND K UDO , M. Algorithms for adversarial bandit problems with multiple plays. In ALT (2010), pp. 375?389. [18] WARMUTH , M. K., AND K UZMIN , D. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In In Proc. of NIPS (2006). 9
3,271
3,963
Learning Concept Graphs from Text with Stick-Breaking Priors

Padhraic Smyth
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
[email protected]

America L. Chambers
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
[email protected]

Mark Steyvers
Department of Cognitive Science
University of California, Irvine
Irvine, CA 92697
[email protected]

Abstract

We present a generative probabilistic model for learning general graph structures, which we term concept graphs, from text. Concept graphs provide a visual summary of the thematic content of a collection of documents, a task that is difficult to accomplish using only keyword search. The proposed model can learn different types of concept graph structures and is capable of utilizing partial prior knowledge about graph structure as well as labeled documents. We describe a generative model that is based on a stick-breaking process for graphs, and a Markov chain Monte Carlo inference procedure. Experiments on simulated data show that the model can recover known graph structure when learning in both unsupervised and semi-supervised modes. We also show that the proposed model is competitive in terms of empirical log likelihood with existing structure-based topic models (hPAM and hLDA) on real-world text data sets. Finally, we illustrate the application of the model to the problem of updating Wikipedia category graphs.

1 Introduction

We present a generative probabilistic model for learning concept graphs from text. We define a concept graph as a rooted, directed graph where the nodes represent thematic units (called concepts) and the edges represent relationships between concepts. Concept graphs are useful for summarizing document collections and providing a visualization of the thematic content and structure of large document sets, a task that is difficult to accomplish using only keyword search. An example of a concept graph is Wikipedia's category graph (http://en.wikipedia.org/wiki/Category:Main_topic_classifications). Figure 1 shows a small portion of the Wikipedia category graph rooted at the category MACHINE LEARNING (as of May 5, 2009). From the graph we can quickly infer that the collection of machine learning articles in Wikipedia focuses primarily on evolutionary algorithms and Markov models, with less emphasis on other aspects of machine learning such as Bayesian networks and kernel methods. The problem we address in this paper is that of learning a concept graph given a collection of documents where (optionally) we may have concept labels for the documents and an initial graph structure.
In the latter scenario, the task is to identify additional concepts in the corpus that are not reflected in the graph, or additional relationships between concepts in the corpus (via the co-occurrence of concepts in documents) that are not reflected in the graph. This is particularly well suited for document collections like Wikipedia, where the set of articles is changing at such a fast rate that an automatic method for updating the concept graph may be preferable to manual editing or to re-learning the hierarchy from scratch.

[Figure 1: A portion of the Wikipedia category supergraph for the node MACHINE LEARNING.]

[Figure 2: A portion of the Wikipedia category subgraph rooted at the node MACHINE LEARNING.]

The foundation of our approach is latent Dirichlet allocation (LDA) [1]. LDA is a probabilistic model for automatically identifying topics within a document collection, where a topic is a probability distribution over words. The standard LDA model does not include any notion of relationships, or dependence, between topics. In contrast, methods such as the hierarchical topic model (hLDA) [2] learn a set of topics in the form of a tree structure. The restriction to tree structures, however, is not well suited for large document collections like Wikipedia. Figure 1 gives an example of the highly non-tree-like nature of the Wikipedia category graph. The hierarchical Pachinko allocation model (hPAM) [3] is able to learn a set of topics arranged in a fixed-size graph, with a nonparametric version introduced in [4]. The model we propose in this paper is a simpler alternative to hPAM and nonparametric hPAM that can achieve the same flexibility (i.e., learning arbitrary directed acyclic graphs over a possibly infinite number of nodes) within a simpler probabilistic framework. In addition, our model provides a formal mechanism for utilizing labeled data and existing concept graph structures.

Other methods for creating concept graphs include the use of techniques such as hierarchical clustering, pattern mining, and formal concept analysis to construct ontologies from document collections [5, 6, 7]. Our approach differs in that we utilize a probabilistic framework, which enables us (for example) to make inferences about concepts and documents.

Our primary novel contribution is the introduction of a flexible probabilistic framework for learning general graph structures from text that is capable of utilizing both unlabeled documents as well as labeled documents and prior knowledge in the form of existing graph structures. In the next section we introduce the stick-breaking distribution and show how it can be used as a prior for graph structures.
We then introduce our generative model and explain how it can be adapted for the case where we have an initial graph structure. We derive collapsed Gibbs sampling equations for our model and present a series of experiments on simulated and real text data. We compare our performance against hLDA and hPAM as baselines. We conclude with a discussion of the merits and limitations of our approach.

2 Stick-breaking Distributions

Stick-breaking distributions $P(\cdot)$ are discrete probability distributions of the form

$$P(\cdot) = \sum_{j=1}^{\infty} \pi_j \, \delta_{x_j}(\cdot), \qquad \text{where} \quad \sum_{j=1}^{\infty} \pi_j = 1, \quad 0 \le \pi_j \le 1, \tag{1}$$

and $\delta_{x_j}(\cdot)$ is the delta function centered at the atom $x_j$. The $x_j$ variables are sampled independently from a base distribution $H$ (where $H$ is assumed to be continuous). The stick-breaking weights $\pi_j$ have the form

$$\pi_1 = v_1, \qquad \pi_j = v_j \prod_{k=1}^{j-1} (1 - v_k) \quad \text{for } j = 2, 3, \ldots, \infty \tag{2}$$

where the $v_j$ are independent $\mathrm{Beta}(\alpha_j, \beta_j)$ random variables. Stick-breaking distributions derive their name from the analogy of repeatedly breaking the remainder of a unit-length stick at a randomly chosen breakpoint. See [8] for more details.

Unlike the Chinese restaurant process, the stick-breaking process lacks exchangeability. The probability of sampling a particular cluster from $P(\cdot)$ given the sequences $\{x_j\}$ and $\{v_j\}$ is not equal to the probability of sampling the same cluster given a permutation of the sequences $\{x_{\sigma(j)}\}$ and $\{v_{\sigma(j)}\}$. This can be seen in Equation 2, where the probability of sampling $x_j$ depends upon the values of the $j-1$ preceding Beta random variables $\{v_1, v_2, \ldots, v_{j-1}\}$. If we fix $x_j$ and permute every other atom, then the probability of sampling $x_j$ changes: it is now determined by the Beta random variables $\{v_{\sigma(1)}, v_{\sigma(2)}, \ldots, v_{\sigma(j-1)}\}$.

The stick-breaking distribution can be utilized as a prior distribution on graph structures. We construct a prior on graph structures by specifying a distribution at each node (denoted $P_t$) that governs the probability of transitioning from node $t$ to another node in the graph. There is some freedom in choosing $P_t$; however, we have two constraints. First, making a new transition must have non-zero probability. In Figure 1 it is clear that from MACHINE LEARNING we should be able to transition to any of its children. However, we may discover evidence for passing directly to a leaf node such as STATISTICAL NATURAL LANGUAGE PROCESSING (e.g., if we observe new articles related to statistical natural language processing that do not use Markov models). Second, making a transition to a new node must have non-zero probability. For example, we may observe new articles related to the topic of bioinformatics. In this case, we want to add a new node to the graph (BIOINFORMATICS) and assign some probability of transitioning to it from other nodes.

With these two requirements we can now provide a formal definition for $P_t$. We begin with an initial graph structure $G_0$ with $t = 1 \ldots T$ nodes. For each node $t$ we define a feasible set $F_t$ as the collection of nodes to which $t$ can transition. The feasible set may contain the children of node $t$ or possible child nodes of node $t$ (as discussed above). In general, $F_t$ is some subset of the nodes in $G_0$. We add a special node called the "exit node" to $F_t$. If we sample the exit node then we exit from the graph instead of transitioning forward. We define $P_t$ as a stick-breaking distribution over the finite set of nodes $F_t$, where the remaining probability mass is assigned to an infinite set of new nodes (nodes that exist but have not yet been observed).
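To make the stick-breaking construction concrete, the following is a minimal sketch (ours, not the authors' code) of drawing a truncated set of stick-breaking weights. The truncation level and the symmetric Beta parameters are illustrative assumptions; the leftover mass plays the role of the infinite tail of unseen atoms.

```python
import numpy as np

def stick_breaking_weights(alpha, beta, num_atoms, rng):
    """Draw pi_1..pi_K via Equation (2): pi_j = v_j * prod_{k<j} (1 - v_k),
    with v_j ~ Beta(alpha, beta). Returns the weights and the leftover
    tail mass corresponding to the infinite set of unseen atoms."""
    v = rng.beta(alpha, beta, size=num_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    pi = v * remaining
    return pi, 1.0 - pi.sum()

rng = np.random.default_rng(0)
pi, tail = stick_breaking_weights(alpha=1.0, beta=1.0, num_atoms=10, rng=rng)
print(pi, tail)  # weights decay stochastically; tail is the unbroken remainder
```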
The exact form of $P_t$ is shown below:

$$P_t(\cdot) = \sum_{j=1}^{|F_t|} \pi_{tj} \, \delta_{f_{tj}}(\cdot) \;+\; \sum_{j=|F_t|+1}^{\infty} \pi_{tj} \, \delta_{x_{tj}}(\cdot)$$

The first $|F_t|$ atoms of the stick-breaking distribution are the feasible nodes $f_{tj} \in F_t$. The remaining atoms are unidentifiable nodes that have yet to be observed (denoted $x_{tj}$ for simplicity). This is not yet a working definition unless we explicitly state which nodes are in the set $F_t$. Our model does not in general assume any specific form for $F_t$; instead, the user is free to define it as they like. In our experiments, we first assign each node to a unique depth and then define $F_t$ as any node at the next lower depth. The choice of $F_t$ determines the type of graph structures that can be learned. For the choice of $F_t$ used in this paper, edges that traverse multiple depths are not allowed and edges between nodes at the same depth are not allowed. This prevents cycles from forming and allows inference to be performed in a timely manner. More generally, one could extend the definition of $F_t$ to include any node at a lower depth.

1. For node $t \in \{1, \ldots, \infty\}$
   i. Sample stick-breaking weights $\{v_{tj}\} \mid \alpha, \beta \sim \mathrm{Beta}(\alpha, \beta)$
   ii. Sample word distribution $\phi_t \mid \eta \sim \mathrm{Dirichlet}(\eta)$
2. For document $d \in \{1, 2, \ldots, D\}$
   i. Sample a distribution over levels $\theta_d \mid a, b \sim \mathrm{Beta}(a, b)$
   ii. Sample path $p_d \sim \{P_t\}_{t=1}^{\infty}$
   iii. For word $i \in \{1, 2, \ldots, N_d\}$:
        sample level $l_{d,i} \sim \mathrm{TruncatedDiscrete}(\theta_d)$;
        generate word $x_{d,i} \mid \{p_d, l_{d,i}, \phi\} \sim \mathrm{Multinomial}(\phi_{p_d[l_{d,i}]})$

Figure 3: Generative process for GraphLDA

Due to a lack of exchangeability, we must specify the stick-breaking order of the elements in $F_t$. Note that despite the order, the elements of $F_t$ always occur before the infinite set of new nodes in the stick-breaking permutation. We use a Metropolis-Hastings sampler proposed by [10] to learn the permutation of feasible nodes with the highest likelihood given the data.

3 Generative Process

Figure 3 shows the generative process for our proposed model, which we refer to as GraphLDA. We observe a collection of documents $d = 1 \ldots D$ where document $d$ has $N_d$ words. As discussed earlier, each node $t$ is associated with a stick-breaking prior $P_t$. In addition, we associate with each node a multinomial distribution $\phi_t$ over words, in the fashion of topic models. A two-stage process is used to generate document $d$. First, a path through the graph is sampled from the stick-breaking distributions. We denote this path as $p_d$. The $(i+1)$st node in the path is sampled from $P_{p_{d,i}}(\cdot)$, which is the stick-breaking distribution at the $i$th node in the path. This process continues until an exit node is sampled. Then for each word $x_i$ a level in the path, $l_{di}$, is sampled from a truncated discrete distribution. The word $x_i$ is generated by the topic at level $l_{di}$ of the path $p_d$, which we denote as $p_d[l_{di}]$. In the case where we observe labeled documents and an initial graph structure, the path for document $d$ is restricted to end at the concept label of document $d$.

One possible option for the length distribution is a multinomial distribution over levels. We take a different approach and instead use a parametric smooth form. The motivation is to constrain the length distribution to have the same general functional form across documents (in contrast to the relatively unconstrained multinomial), but to allow the parameters of the distribution to be document-specific. We considered two simple options: Geometric and Poisson (both truncated to the number of possible levels).
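The sketch below (ours, for illustration only) mirrors the two-stage generative process of Figure 3 with a truncated geometric over levels. The `stick_priors` data structure and the calling conventions are hypothetical, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_path(stick_priors, root=0):
    """Walk the graph from the root, at each node sampling a child from its
    stick-breaking distribution until the special exit event is drawn.
    stick_priors[t] = (children, probs), where a child of None marks exit."""
    path, node = [root], root
    while True:
        children, probs = stick_priors[node]
        j = rng.choice(len(children), p=probs)
        if children[j] is None:          # exit node: stop extending the path
            return path
        node = children[j]
        path.append(node)

def generate_document(stick_priors, phi, theta_d, n_words):
    """phi is a (num_nodes, vocab) array of per-node topics; theta_d is the
    document's level parameter. Level 0 maps to the last node on the path."""
    path = sample_path(stick_priors)
    L = len(path)
    level_probs = theta_d * (1.0 - theta_d) ** np.arange(L)
    level_probs /= level_probs.sum()     # truncated geometric over levels
    words = []
    for _ in range(n_words):
        l = rng.choice(L, p=level_probs)
        node = path[-(l + 1)]
        words.append(rng.choice(phi.shape[1], p=phi[node]))
    return path, words
```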
In initial experiments the Geometric performed better than the Poisson, so the Geometric was used in all experiments reported in this paper. If word $x_{di}$ has level $l_{di} = 0$ then the word is generated by the topic at the last node on the path, and successive levels correspond to earlier nodes in the path. In the case of labeled documents, this matches our belief that a majority of words in the document should be assigned to the concept label itself.

4 Inference

We marginalize over the topic distributions $\phi_t$ and the stick-breaking weights $\{v_{tj}\}$. We use a collapsed Gibbs sampler [9] to infer the path assignment $p_d$ for each document, the level distribution parameter $\theta_d$ for each document, and the level assignment $l_{di}$ for each word. Of the five hyperparameters in the model, inference is sensitive to the values of $\alpha$ and $\beta$, so we place an Exponential prior on both and use a Metropolis-Hastings sampler to learn the best setting.

4.1 Sampling Paths

For each document, we must sample a path $p_d$ conditioned on all other paths $\mathbf{p}_{-d}$, the level variables, and the word tokens. We only consider paths whose length is greater than or equal to the maximum level of the words in the document.

$$p(p_d \mid \mathbf{x}, \mathbf{l}, \mathbf{p}_{-d}) \propto p(\mathbf{x}_d \mid \mathbf{x}_{-d}, \mathbf{l}, \mathbf{p}) \cdot p(p_d \mid \mathbf{p}_{-d}) \tag{1}$$

The first term in Equation 1 is the probability of all words in the document given the path $p_d$. We compute this probability by marginalizing over the topic distributions $\phi_t$:

$$p(\mathbf{x}_d \mid \mathbf{x}_{-d}, \mathbf{l}, \mathbf{p}) = \prod_{l=1}^{\ell_d} \left( \frac{\Gamma(V\eta + \sum_v N^{-d}_{p_d[l],v})}{\Gamma(V\eta + \sum_v N_{p_d[l],v})} \prod_{v=1}^{V} \frac{\Gamma(\eta + N_{p_d[l],v})}{\Gamma(\eta + N^{-d}_{p_d[l],v})} \right)$$

We use $\ell_d$ to denote the length of path $p_d$. The notation $N_{p_d[l],v}$ stands for the number of times word type $v$ has been assigned to node $p_d[l]$. The superscript $-d$ means we first decrement the count $N_{p_d[l],v}$ for every word in document $d$.

The second term is the conditional probability of the path $p_d$ given all other paths $\mathbf{p}_{-d}$. We present the sampling equation under the assumption that there is a maximum number of nodes $M$ allowed at each level. We first consider the probability of sampling a single edge in the path from a node $x$ to one of its feasible nodes $\{y_1, y_2, \ldots, y_M\}$, where the node $y_1$ has the first position in the stick-breaking permutation, $y_2$ has the second position, $y_3$ the third, and so on. We denote the number of paths that have gone from $x$ to $y_i$ as $N_{(x,y_i)}$. We denote the number of paths that have gone from $x$ to a node with a strictly higher position in the stick-breaking distribution than $y_i$ as $N_{(x,>y_i)}$; that is, $N_{(x,>y_i)} = \sum_{k=i+1}^{M} N_{(x,y_k)}$. Extending this notation, we denote the sum $N_{(x,y_i)} + N_{(x,>y_i)}$ as $N_{(x,\ge y_i)}$. The probability of selecting node $y_i$ is given by:

$$p(x \to y_i \mid \mathbf{p}_{-d}) = \frac{\alpha + N_{(x,y_i)}}{\alpha + \beta + N_{(x,\ge y_i)}} \prod_{r=1}^{i-1} \frac{\beta + N_{(x,>y_r)}}{\alpha + \beta + N_{(x,\ge y_r)}} \qquad \text{for } i = 1 \ldots M$$

If $y_m$ is the last node with a nonzero count $N_{(x,y_m)}$ and $m \ll M$, it is convenient to compute the probability of transitioning to $y_i$, for $i \le m$, and the probability of transitioning to any node higher than $y_m$. The probability of transitioning to a node higher than $y_m$ is given by

$$\sum_{k=m+1}^{M} p(x \to y_k \mid \mathbf{p}_{-d}) = \kappa \cdot \left[ 1 - \left( \frac{\beta}{\alpha + \beta} \right)^{M-m} \right]$$

where $\kappa = \prod_{r=1}^{m} \frac{\beta + N_{(x,>y_r)}}{\alpha + \beta + N_{(x,\ge y_r)}}$. A similar derivation can be used to compute the probability of sampling a node higher than $y_m$ when $M$ is equal to infinity. Now that we have computed the probability of a single edge, we can compute the probability of an entire path $p_d$:

$$p(p_d \mid \mathbf{p}_{-d}) = \prod_{j=1}^{\ell_d} p(p_{dj} \to p_{d,j+1} \mid \mathbf{p}_{-d})$$
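As an illustration, the following sketch (ours, not the authors' code) computes the single-edge transition probabilities above for one node, given usage counts of its feasible children in stick-breaking order; the example counts and hyperparameter values are arbitrary.

```python
import numpy as np

def edge_probabilities(alpha, beta, counts):
    """Posterior predictive over which child a new path takes from node x.
    counts[i] is N_(x, y_i): how many existing paths chose child y_i, in
    stick-breaking order. Returns per-child probabilities plus the mass
    left over for children beyond the last listed one."""
    counts = np.asarray(counts, dtype=float)
    n_gt = np.append(np.cumsum(counts[::-1])[::-1][1:], 0.0)  # N_(x, >y_i)
    n_ge = counts + n_gt                                      # N_(x, >=y_i)
    probs = np.empty(len(counts))
    pass_prob = 1.0  # probability of having skipped all earlier children
    for i in range(len(counts)):
        probs[i] = pass_prob * (alpha + counts[i]) / (alpha + beta + n_ge[i])
        pass_prob *= (beta + n_gt[i]) / (alpha + beta + n_ge[i])
    return probs, pass_prob

# Example: three children with usage counts 5, 2, 0 under alpha = beta = 1.
p, tail = edge_probabilities(1.0, 1.0, [5, 2, 0])
print(p, tail)  # per-child probabilities; tail is the unseen-children mass
```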
4.2 Sampling Levels

For the $i$th word in the $d$th document we must sample a level $l_{di}$ conditioned on all other levels $\mathbf{l}_{-di}$, the document paths, the level parameters $\boldsymbol{\theta}$, and the word tokens:

$$p(l_{di} \mid \mathbf{x}, \mathbf{l}_{-di}, \mathbf{p}, \boldsymbol{\theta}) \propto \left( \frac{\eta + N^{-di}_{p_d[l_{di}],x_{di}}}{V\eta + N^{-di}_{p_d[l_{di}],\cdot}} \right) \cdot \frac{(1-\theta_d)^{l_{di}}\,\theta_d}{1 - (1-\theta_d)^{\ell_d+1}}$$

The first term is the probability of word type $x_{di}$ given the topic at node $p_d[l_{di}]$. The second term is the probability of the level $l_{di}$ given the level parameter $\theta_d$.

4.3 Sampling $\theta$ Variables

Finally, we must sample the level distribution $\theta_d$ conditioned on the rest of the level parameters $\boldsymbol{\theta}_{-d}$, the level variables, and the word tokens:

$$p(\theta_d \mid \mathbf{x}, \mathbf{l}, \mathbf{p}, \boldsymbol{\theta}_{-d}) \propto \frac{\theta_d^{a-1}(1-\theta_d)^{b-1}}{B(a,b)} \prod_{i=1}^{N_d} \left( \frac{(1-\theta_d)^{l_{di}}\,\theta_d}{1 - (1-\theta_d)^{\ell_d+1}} \right) \tag{2}$$

[Figure 4: Learning results with simulated data. Panels: (a) the simulated graph; (b) the learned graph with 0 labeled documents; (c) the learned graph with 250 labeled documents; (d) the learned graph with 4000 labeled documents.]

Due to the normalization constant $1 - (1-\theta_d)^{\ell_d+1}$, Equation 2 is not a recognizable probability distribution and we must use rejection sampling. Since the first term in Equation 2 is always less than or equal to 1, the sampling distribution is dominated by a $\mathrm{Beta}(a,b)$ distribution. Following the rejection sampling algorithm, we sample a candidate value for $\theta_d$ from $\mathrm{Beta}(a,b)$ and either accept it with probability $\prod_{i=1}^{N_d} \frac{(1-\theta_d)^{l_{di}}\,\theta_d}{1-(1-\theta_d)^{\ell_d+1}}$ or reject it and sample again.

4.4 Metropolis-Hastings for Stick-Breaking Permutations

In addition to the Gibbs sampling, we employ a Metropolis-Hastings sampler presented in [10] to mix over stick-breaking permutations. Consider a node $x$ with feasible nodes $\{y_1, y_2, \ldots, y_M\}$. We sample two feasible nodes $y_i$ and $y_j$ from a uniform distribution (in [10], feasible nodes are instead sampled from the prior probability distribution; however, for small values of $\alpha$ and $\beta$ this results in extremely slow mixing). Assume $y_i$ comes before $y_j$ in the stick-breaking distribution. Then the probability of swapping the positions of nodes $y_i$ and $y_j$ is given by

$$\min\left\{ 1,\; \prod_{k=0}^{N_{(x,y_i)}-1} \frac{\alpha+\beta+N_{(x,>y_j)}+k}{\alpha+\beta+N'_{(x,>y_i)}+k} \;\prod_{k=0}^{N_{(x,y_j)}-1} \frac{\alpha+\beta+N'_{(x,>y_i)}+k}{\alpha+\beta+N_{(x,>y_j)}+k} \right\}$$

where $N'_{(x,>y_i)} = N_{(x,>y_i)} - N_{(x,y_j)}$. See [10] for a full derivation. After every new path assignment, we propose one swap for each node in the graph.

5 Experiments and Results

In this section, we present experiments performed on both simulated and real text data. We compare the performance of GraphLDA against hPAM and hLDA.

5.1 Simulated Text Data

In this section, we illustrate how the performance of GraphLDA improves as the fraction of labeled data increases. Figure 4(a) shows a simulated concept graph with 10 nodes drawn according to the stick-breaking generative process with parameter values $\alpha = .025$, $\beta = 10$, $\eta = 10$, $a = 2$ and $b = 5$. The vocabulary size is 1,000 words and we generate 4,000 documents with 250 words each. Each edge in the graph is labeled with the number of paths that traverse it. Figures 4(b)-(d) show the learned graph structures as the fraction of labeled data increases from 0 labeled and 4,000 unlabeled documents to all 4,000 documents being labeled. In addition to labeling the edges, we label each node based upon the similarity of the learned topic at the node to the topics of the original graph structure. The Gibbs sampler is initialized to a root node when there is no labeled data. With labeled data, the Gibbs sampler is initialized with the correct placement of nodes to levels. The sampler does not observe the edge structure of the graph nor the correct number of nodes at each level (i.e., the sampler may add additional nodes). With no labeled data, the sampler is unable to recover the relationship between concepts 8 and 10 (due to the relatively small number of documents that contain words from both concepts). With 250 labeled documents, the sampler is able to learn the correct placement of both nodes 8 and 10 (although the topics contain some noise).

5.2 Wikipedia Articles

In this section, we compare the performance of GraphLDA to hPAM and hLDA on a set of 518 machine-learning articles taken from Wikipedia. The input to each model is only the article text. All models are restricted to learning a three-level hierarchical structure. For both GraphLDA and hPAM, the number of nodes at each level was set to 25. For GraphLDA, the parameters were fixed at $\eta = 1$, $a = 1$ and $b = 1$. The parameters $\alpha$ and $\beta$ were initialized to 1 and .001 respectively and optimized using a Metropolis-Hastings sampler. We used the MALLET toolkit implementation of hPAM (which implements the "exit node" version of hPAM) and of hLDA [11]. For hPAM, we used different settings for the topic hyperparameter $\eta = (.001, .01, .1)$. For hLDA we set $\eta = .1$ and considered $\gamma = (.1, 1, 10)$, where $\gamma$ is the smoothing parameter for the Chinese restaurant process, and $\pi = (.1, 1, 10)$, where $\pi$ is the smoothing over levels in the graph. All models were run for 9,000 iterations to ensure burn-in, and samples were taken every 100 iterations thereafter, for a total of 10,000 iterations. The performance of each model was evaluated on a hold-out set consisting of 20% of the articles using both empirical likelihood and the left-to-right evaluation algorithm (see Sections 4.1 and 4.5 of [12]), which are measures of generalization to unseen data. For both GraphLDA and hLDA we use the distribution over paths that was learned during training to compute the per-word log likelihood. For hPAM we compute the MLE estimate of the Dirichlet hyperparameters for both the distribution over super-topics and the distributions over sub-topics from the training documents.

Table 1 shows the per-word log likelihood for each model averaged over the ten samples. GraphLDA is competitive when computing the empirical log likelihood. We speculate that GraphLDA's lower performance in terms of left-to-right log likelihood is due to our choice of the geometric distribution over levels (and our choice to position the geometric distribution at the last node of the path) and that a more flexible approach could result in better performance.

Table 1: Per-word log likelihood of test documents

Model     Parameters              Empirical LL    Left-to-Right LL
GraphLDA  MH opt.                 -7.10 ± .003    -7.13 ± .009
hPAM      eta = .1                -7.36 ± .013    -6.11 ± .007
          eta = .01               -7.33 ± .012    -6.47 ± .012
          eta = .001              -7.38 ± .006    -6.71 ± .013
hLDA      gamma = .1,  pi = .1    -7.10 ± .004    -6.82 ± .007
          gamma = .1,  pi = 1     -7.09 ± .003    -6.86 ± .006
          gamma = .1,  pi = 10    -7.08 ± .003    -6.90 ± .008
          gamma = 1,   pi = .1    -7.08 ± .003    -6.83 ± .007
          gamma = 1,   pi = 1     -7.08 ± .002    -6.86 ± .006
          gamma = 1,   pi = 10    -7.06 ± .003    -6.88 ± .008
          gamma = 10,  pi = .1    -7.07 ± .004    -6.81 ± .006
          gamma = 10,  pi = 1     -7.07 ± .003    -6.83 ± .005
          gamma = 10,  pi = 10    -7.06 ± .003    -6.88 ± .010

[Figure 5: Wikipedia graph structure with additional machine learning abstracts; nodes show the top words of each learned topic (e.g., kernel methods, clustering, SVMs, boosting/ensembles). The edge widths correspond to the probability of the edge in the graph.]

5.3 Wikipedia Articles with a Graph Structure

In our final experiment we illustrate how GraphLDA can be used to update an existing category graph. We use the aforementioned 518 machine-learning Wikipedia articles, along with their category labels, to learn topic distributions for each node in Figure 1. The sampler is initialized with the correct placement of nodes, and each document is initialized to a random path from the root to its category label. After 2,000 iterations, we fix the path assignments for the Wikipedia articles and introduce a new set of documents. We use a collection of 400 machine learning abstracts from the International Conference on Machine Learning (ICML). We sample paths for the new collection of documents, keeping the paths from the Wikipedia articles fixed. The sampler was allowed to add new nodes to each level to explain any new concepts that occurred in the ICML text set. Figure 5 illustrates a portion of the final graph structure. The nodes in bold are the original nodes from the Wikipedia category graph. The results show that the model is capable of augmenting an existing concept graph with new concepts (e.g., clustering, support vector machines (SVMs), etc.) and learning meaningful relationships (e.g., boosting/ensembles are on the same path as the concepts for SVMs and neural networks).

6 Discussion and Conclusion

Motivated by the increasing availability of large-scale structured collections of documents such as Wikipedia, we have presented a flexible non-parametric Bayesian framework for learning concept graphs from text. The proposed approach can combine unlabeled data with prior knowledge in the form of labeled documents and existing graph structures. Extensions such as allowing the model to handle multiple paths per document are likely to be worth pursuing. In this paper we did not discuss scalability to large graphs, which is likely to be an important issue in practice. Computing the probability of every path during sampling, where the number of paths is a product over the number of nodes at each level, is a computational bottleneck in the current inference algorithm and will not scale. Approximate inference methods that can address this issue should be quite useful in this context.

7 Acknowledgements

This material is based upon work supported in part by the National Science Foundation under Award Number IIS-0083489, by a Microsoft Scholarship (AC), and by a Google Faculty Research award (PS). The authors would also like to thank Ian Porteous and Alex Ihler for useful discussions.
References

[1] David Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[2] David M. Blei, Thomas L. Griffiths, and Michael I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57, 2010.
[3] David Mimno, Wei Li, and Andrew McCallum. Mixtures of hierarchical topics with Pachinko allocation. In Proceedings of the 21st Intl. Conf. on Machine Learning, 2007.
[4] Wei Li, David Blei, and Andrew McCallum. Nonparametric Bayes Pachinko allocation. In Proceedings of the Twenty-Third Annual Conference on Uncertainty in Artificial Intelligence (UAI-07), pages 243-250, 2007.
[5] Blaz Fortuna, Marko Grobelnik, and Dunja Mladenic. OntoGen: Semi-automatic ontology editor. In Proceedings of the Human Computer Interaction International Conference, volume 4558, pages 309-318, 2007.
[6] S. Bloehdorn, P. Cimiano, and A. Hotho. Learning ontologies to improve text clustering and classification. In From Data and Inf. Analysis to Know. Eng.: Proc. of the 29th Annual Conf. of the German Classification Society (GfKl '05), volume 30 of Studies in Classification, Data Analysis and Know. Org., pages 334-341. Springer, Feb. 2005.
[7] P. Cimiano, A. Hotho, and S. Staab. Learning concept hierarchies from text using formal concept analysis. J. Artificial Intelligence Research (JAIR), 24:305-339, 2005.
[8] Hemant Ishwaran and Lancelot F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, March 2001.
[9] Tom Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the Natl. Academy of Sciences of the U.S.A., 101 Suppl 1:5228-5235, 2004.
[10] Ian Porteous, Alex Ihler, Padhraic Smyth, and Max Welling. Gibbs sampling for coupled infinite mixture models in the stick-breaking representation. In Proceedings of UAI 2006, pages 385-392, July 2006.
[11] Andrew Kachites McCallum. MALLET: A machine learning for language toolkit. http://mallet.cs.umass.edu, 2002.
[12] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. Evaluation methods for topic models. In Proceedings of the 26th Intl. Conf. on Machine Learning (ICML 2009), 2009.
Double Q-learning

Hado van Hasselt
Multi-agent and Adaptive Computation Group
Centrum Wiskunde & Informatica

Abstract

In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values. These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.

1 Introduction

Q-learning is a popular reinforcement learning algorithm that was proposed by Watkins [1] and can be used to optimally solve Markov Decision Processes (MDPs) [2]. We show that Q-learning's performance can be poor in stochastic MDPs because of large overestimations of the action values. We discuss why this occurs and propose an algorithm called Double Q-learning to avoid this overestimation. The update of Q-learning is

$$Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + \alpha_t(s_t, a_t)\left( r_t + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t) \right). \tag{1}$$

In this equation, $Q_t(s,a)$ gives the value of the action $a$ in state $s$ at time $t$. The reward $r_t$ is drawn from a fixed reward distribution $R: S \times A \times S \to \mathbb{R}$, where $E\{r_t \mid (s,a,s') = (s_t, a_t, s_{t+1})\} = R^{s'}_{sa}$. The next state $s_{t+1}$ is determined by a fixed state transition distribution $P: S \times A \times S \to [0,1]$, where $P^{s'}_{sa}$ gives the probability of ending up in state $s'$ after performing $a$ in $s$, and $\sum_{s'} P^{s'}_{sa} = 1$. The learning rate $\alpha_t(s,a) \in [0,1]$ ensures that the update averages over possible randomness in the rewards and transitions in order to converge in the limit to the optimal action value function. This optimal value function is the solution to the following set of equations [3]:

$$\forall s, a: \quad Q^*(s,a) = \sum_{s'} P^{s'}_{sa} \left( R^{s'}_{sa} + \gamma \max_{a'} Q^*(s', a') \right). \tag{2}$$

The discount factor $\gamma \in [0,1)$ has two interpretations. First, it can be seen as a property of the problem that is to be solved, weighing immediate rewards more heavily than later rewards. Second, in non-episodic tasks, the discount factor makes sure that every action value is finite and therefore well-defined. It has been proven that Q-learning reaches the optimal value function $Q^*$ with probability one in the limit under some mild conditions on the learning rates and exploration policy [4-6].

Q-learning has been used to find solutions on many problems [7-9] and was an inspiration to similar algorithms, such as Delayed Q-learning [10], Phased Q-learning [11] and Fitted Q-iteration [12], to name some. These variations have mostly been proposed in order to speed up convergence rates compared to the original algorithm. The convergence rate of Q-learning can be exponential in the number of experiences [13], although this is dependent on the learning rates, and with a proper choice of learning rates convergence in polynomial time can be obtained [14]. The variants named above can also claim polynomial time convergence.
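For reference, a minimal sketch (ours, not from the paper) of the tabular update in Equation (1); the state and action space sizes in the usage example are arbitrary.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """One tabular Q-learning update, Equation (1). Q is a
    (num_states, num_actions) array. The max over the next state's
    action values is the single-estimator step that, as argued below,
    induces an overestimation bias under noisy rewards."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

Q = np.zeros((5, 2))  # toy sizes, purely illustrative
q_learning_update(Q, s=0, a=1, r=1.0, s_next=3, alpha=0.5, gamma=0.95)
```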
Contributions An important aspect of the Q-learning algorithm has been overlooked in previous work: the use of the max operator to determine the value of the next state can cause large overestimations of the action values. We show that Q-learning can suffer a large performance penalty because of a positive bias that results from using the maximum value as an approximation for the maximum expected value. We propose an alternative double estimator method to find an estimate for the maximum value of a set of stochastic values, and we show that this sometimes underestimates rather than overestimates the maximum expected value. We use this to construct the new Double Q-learning algorithm.

The paper is organized as follows. In the second section, we analyze two methods to approximate the maximum expected value of a set of random variables. In Section 3 we present the Double Q-learning algorithm that extends our analysis in Section 2 and avoids overestimations. The new algorithm is proven to converge to the optimal solution in the limit. In Section 4 we show the results on some experiments to compare these algorithms. Some general discussion is presented in Section 5, and Section 6 concludes the paper with some pointers to future work.

2 Estimating the Maximum Expected Value

In this section, we analyze two methods to find an approximation for the maximum expected value of a set of random variables. The single estimator method uses the maximum of a set of estimators as an approximation. This approach to approximate the value of the maximum expected value is positively biased, as discussed in previous work in economics [15] and decision making [16]. It is a bias related to the Winner's Curse in auctions [17, 18] and it can be shown to follow from Jensen's inequality [19]. The double estimator method uses two estimates for each variable and uncouples the selection of an estimator and its value. We are unaware of previous work that discusses it. We analyze this method and show that it can have a negative bias.

Consider a set of $M$ random variables $X = \{X_1, \ldots, X_M\}$. In many problems, one is interested in the maximum expected value of the variables in such a set:

$$\max_i E\{X_i\}. \tag{3}$$

Without knowledge of the functional form and parameters of the underlying distributions of the variables in $X$, it is impossible to determine (3) exactly. Most often, this value is approximated by constructing approximations for $E\{X_i\}$ for all $i$. Let $S = \bigcup_{i=1}^{M} S_i$ denote a set of samples, where $S_i$ is the subset containing samples for the variable $X_i$. We assume that the samples in $S_i$ are independent and identically distributed (iid). Unbiased estimates for the expected values can be obtained by computing the sample average for each variable: $E\{X_i\} = E\{\mu_i\} \approx \mu_i(S) \stackrel{\text{def}}{=} \frac{1}{|S_i|} \sum_{s \in S_i} s$, where $\mu_i$ is an estimator for variable $X_i$. This approximation is unbiased since every sample $s \in S_i$ is an unbiased estimate for the value of $E\{X_i\}$. The error in the approximation thus consists solely of the variance in the estimator and decreases when we obtain more samples.

We use the following notations: $f_i$ denotes the probability density function (PDF) of the $i$th variable $X_i$ and $F_i(x) = \int_{-\infty}^{x} f_i(u)\, du$ is the cumulative distribution function (CDF) of this PDF. Similarly, the PDF and CDF of the $i$th estimator are denoted $f_i^{\mu}$ and $F_i^{\mu}$. The maximum expected value can be expressed in terms of the underlying PDFs as $\max_i E\{X_i\} = \max_i \int_{-\infty}^{\infty} x f_i(x)\, dx$.
2.1 The Single Estimator

An obvious way to approximate the value in (3) is to use the value of the maximal estimator:

$$\max_i E\{X_i\} = \max_i E\{\mu_i\} \approx \max_i \mu_i(S). \tag{4}$$

Because we contrast this method later with a method that uses two estimators for each variable, we call this method the single estimator. Q-learning uses this method to approximate the value of the next state by maximizing over the estimated action values in that state.

The maximal estimator $\max_i \mu_i$ is distributed according to some PDF $f^{\mu}_{\max}$ that is dependent on the PDFs of the estimators $f_i^{\mu}$. To determine this PDF, consider the CDF $F^{\mu}_{\max}(x)$, which gives the probability that the maximum estimate is lower than or equal to $x$. This probability is equal to the probability that all the estimates are lower than or equal to $x$: $F^{\mu}_{\max}(x) \stackrel{\text{def}}{=} P(\max_i \mu_i \le x) = \prod_{i=1}^{M} P(\mu_i \le x) \stackrel{\text{def}}{=} \prod_{i=1}^{M} F_i^{\mu}(x)$. The value $\max_i \mu_i(S)$ is an unbiased estimate for $E\{\max_j \mu_j\} = \int_{-\infty}^{\infty} x f^{\mu}_{\max}(x)\, dx$, which can thus be given by

$$E\{\max_j \mu_j\} = \int_{-\infty}^{\infty} x \, \frac{d}{dx} \prod_{i=1}^{M} F_i^{\mu}(x)\, dx = \sum_{j=1}^{M} \int_{-\infty}^{\infty} x f_j^{\mu}(x) \prod_{i \ne j} F_i^{\mu}(x)\, dx. \tag{5}$$

However, in (3) the order of the max operator and the expectation operator is the other way around. This makes the maximal estimator $\max_i \mu_i(S)$ a biased estimate for $\max_i E\{X_i\}$. This result has been proven in previous work [16]. A generalization of this proof is included in the supplementary material accompanying this paper.

2.2 The Double Estimator

The overestimation that results from the single estimator approach can have a large negative impact on algorithms that use this method, such as Q-learning. Therefore, we look at an alternative method to approximate $\max_i E\{X_i\}$. We refer to this method as the double estimator, since it uses two sets of estimators: $\mu^A = \{\mu^A_1, \ldots, \mu^A_M\}$ and $\mu^B = \{\mu^B_1, \ldots, \mu^B_M\}$.

Both sets of estimators are updated with a subset of the samples we draw, such that $S = S^A \cup S^B$ and $S^A \cap S^B = \emptyset$, and $\mu^A_i(S) = \frac{1}{|S^A_i|} \sum_{s \in S^A_i} s$ and $\mu^B_i(S) = \frac{1}{|S^B_i|} \sum_{s \in S^B_i} s$. Like the single estimator $\mu_i$, both $\mu^A_i$ and $\mu^B_i$ are unbiased if we assume that the samples are split in a proper manner, for instance randomly, over the two sets of estimators. Let $\mathit{Max}^A(S) \stackrel{\text{def}}{=} \{j \mid \mu^A_j(S) = \max_i \mu^A_i(S)\}$ be the set of maximal estimates in $\mu^A(S)$. Since $\mu^B$ is an independent, unbiased set of estimators, we have $E\{\mu^B_j\} = E\{X_j\}$ for all $j$, including all $j \in \mathit{Max}^A$. Let $a^*$ be an estimator that maximizes $\mu^A$: $\mu^A_{a^*}(S) \stackrel{\text{def}}{=} \max_i \mu^A_i(S)$. If there are multiple estimators that maximize $\mu^A$, we can for instance pick one at random. Then we can use $\mu^B_{a^*}$ as an estimate for $\max_i E\{\mu^B_i\}$ and therefore also for $\max_i E\{X_i\}$, and we obtain the approximation

$$\max_i E\{X_i\} = \max_i E\{\mu^B_i\} \approx \mu^B_{a^*}. \tag{6}$$

As we gain more samples the variance of the estimators decreases. In the limit, $\mu^A_i(S) = \mu^B_i(S) = E\{X_i\}$ for all $i$ and the approximation in (6) converges to the correct result.

Assume that the underlying PDFs are continuous. The probability $P(j = a^*)$ for any $j$ is then equal to the probability that all $i \ne j$ give lower estimates. Thus $\mu^A_j(S) = x$ is maximal for some value $x$ with probability $\prod_{i \ne j} P(\mu^A_i < x)$. Integrating out $x$ gives $P(j = a^*) = \int_{-\infty}^{\infty} P(\mu^A_j = x) \prod_{i \ne j} P(\mu^A_i < x)\, dx \stackrel{\text{def}}{=} \int_{-\infty}^{\infty} f^A_j(x) \prod_{i \ne j} F^A_i(x)\, dx$, where $f^A_i$ and $F^A_i$ are the PDF and CDF of $\mu^A_i$. The expected value of the approximation by the double estimator can thus be given by

$$\sum_{j=1}^{M} P(j = a^*)\, E\{\mu^B_j\} = \sum_{j=1}^{M} E\{\mu^B_j\} \int_{-\infty}^{\infty} f^A_j(x) \prod_{i \ne j} F^A_i(x)\, dx. \tag{7}$$

For discrete PDFs the probability that two or more estimators are equal should be taken into account, and the integrals should be replaced with sums. These changes are straightforward.

Comparing (7) to (5), we see the difference is that the double estimator uses $E\{\mu^B_j\}$ in place of $x$. The single estimator overestimates, because $x$ is within the integral and therefore correlates with the monotonically increasing product $\prod_{i \ne j} F_i^{\mu}(x)$. The double estimator underestimates because the probabilities $P(j = a^*)$ sum to one and therefore the approximation is a weighted estimate of unbiased expected values, which must be lower than or equal to the maximum expected value.
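As a quick numerical illustration (ours, not from the paper), the following Monte Carlo sketch contrasts the two estimators on iid standard normal variables, for which $\max_i E\{X_i\} = 0$; the sample sizes and trial counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, trials = 10, 20, 5000   # 10 variables, 20 samples each
true_max = 0.0                # X_i ~ N(0, 1), so max_i E{X_i} = 0

single, double = [], []
for _ in range(trials):
    samples = rng.normal(0.0, 1.0, size=(M, n))
    mu = samples.mean(axis=1)
    single.append(mu.max())                   # single estimator: max of means
    mu_a = samples[:, : n // 2].mean(axis=1)  # double estimator: split samples,
    mu_b = samples[:, n // 2 :].mean(axis=1)  # select argmax with one half,
    double.append(mu_b[np.argmax(mu_a)])      # evaluate with the other half

print("single estimator bias:", np.mean(single) - true_max)  # positive
print("double estimator bias:", np.mean(double) - true_max)  # near zero (iid case)
```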
A similar update is used for QB , using b? and QA . It is important that both Q functions learn from separate sets of experiences, but to select an action to perform one can use both value functions. Therefore, this algorithm is not less data-efficient than Q-learning. In our experiments, we calculated the average of the two Q values for each action and then performed ?-greedy exploration with the resulting average Q values. Double Q-learning is not a full solution to the problem of finding the maximum of the expected values of the actions. Similar to the double estimator in Section 2, action a? may not be the action that maximizes the expected Q function maxa E{QA (s? , a)}. In general E{QB (s? , a? )} ? maxa E{QA (s? , a? )}, and underestimations of the action values can occur. 3.1 Convergence in the Limit In this subsection we show that in the limit Double Q-learning converges to the optimal policy. Intuitively, this is what one would expect: Q-learning is based on the single estimator and Double Q-learning is based on the double estimator and in Section 2 we argued that the estimates by the single and double estimator both converge to the same answer in the limit. However, this argument does not transfer immediately to bootstrapping action values, so we prove this result making use of the following lemma which was also used to prove convergence of Sarsa [20]. 4 Algorithm 1 Double Q-learning 1: Initialize QA ,QB ,s 2: repeat 3: Choose a, based on QA (s, ?) and QB (s, ?), observe r, s? 4: Choose (e.g. random) either UPDATE(A) or UPDATE(B) 5: if UPDATE(A) then 6: Define a? = arg maxa QA (s? , a)  7: QA (s, a) ? QA (s, a) + ?(s, a) r + ?QB (s? , a? ) ? QA (s, a) 8: else if UPDATE(B) then 9: Define b? = arg maxa QB (s? , a) 10: QB (s, a) ? QB (s, a) + ?(s, a)(r + ?QA (s? , b? ) ? QB (s, a)) 11: end if 12: s ? s? 13: until end Lemma 2. Consider a stochastic process (?t , ?t , Ft ), t ? 0, where ?t , ?t , Ft : X ? R satisfy the equations: ?t+1 (xt ) = (1 ? ?t (xt ))?t (xt ) + ?t (xt )Ft (xt ) , (8) where xt ? X and t = 0, 1, 2, . . .. Let Pt be a sequence of increasing ?-fields such that ?0 and ?0 are P0 -measurable and ?t , ?t and Ft?1 are Pt -measurable, t = 1,P 2, . . . . Assume that the P following hold: 1) The set X is finite. 2) ?t (xt ) ? [0, 1] , t ?t (xt ) = ? , t (?t (xt ))2 < ? w.p.1 and ?x 6= xt : ?t (x) = 0. 3) ||E{Ft |Pt }|| ? ?||?t || + ct , where ? ? [0, 1) and ct converges to zero w.p. 1. 4) Var{Ft (xt )|Pt } ? K(1 + ?||?t ||)2 , where K is some constant. Here || ? || denotes a maximum norm. Then ?t converges to zero with probability one. We use this lemma to prove convergence of Double Q-learning under similar conditions as Qlearning. Our theorem is as follows: Theorem 1. Assume the conditions below are fulfilled. Then, in a given ergodic MDP, both QA and QB as updated by Double Q-learning as described in Algorithm 1 will converge to the optimal value function Q? as given in the Bellman optimality equation (2) with probability one if an infinite number of experiences in the form of rewards and state transitions for each state action pair are given by a proper learning policy. The additional conditions are: 1) The MDP is finite, i.e. |S ? A| < ?. 2) ? ? [0, 1). 3) The Q values are stored in P a lookup table. 4)PBoth QA and QB receive an infinite number of updates. 5) ?t (s, a) ? [0, 1], t ?t (s, a) = ?, t (?t (s, a))2 < ? w.p.1, and s? } < ?. ?(s, a) 6= (st , at ) : ?t (s, a) = 0. 6) ?s, a, s? : Var{Rsa A ?proper? 
learning policy ensures that each state action pair is visited an infinite number of times. For instance, in a communicating MDP proper policies include a random policy. Sketch of the proof. We sketch how to apply Lemma 2 to prove Theorem 1 without going into full technical detail. Because of the symmetry in the updates on the functions QA and QB it suffices B to show convergence for either of these. We will apply Lemma 2 with Pt = {QA 0 , Q0 , s 0 , a0 , ?0 , A ? B r1 , s1 , . . ., st , at }, X = S ? A, ?t = Qt ? Q , ? = ? and Ft (st , at ) = rt + ?Qt (st+1 , a? ) ? Q?t (st , at ), where a? = arg maxa QA (st+1 , a). It is straightforward to show the first two conditions of the lemma hold. The fourth condition of the lemma holds as a consequence of the boundedness condition on the variance of the rewards in the theorem. This leaves to show that the third condition on the expected contraction of Ft holds. We can write  ? A ? Ft (st , at ) = FtQ (st , at ) + ? QB , t (st+1 , a ) ? Qt (st+1 , a ) ? ? where FtQ = rt + ?QA t (st+1 , a ) ? Qt (st , at ) is the value of Ft if normal Q-learning would be under consideration. It is well-known that E{FtQ |Pt } ? ?||?t ||, so to apply the lemma we identify A ? BA ? A = QB ct = ?QB t ? Qt converges to t (st+1 , a ) ? ?Qt (st+1 , a ) and it suffices to show that ?t B A BA zero. Depending on whether Q or Q is updated, the update of ?t at time t is either BA B ?BA t+1 (st , at ) = ?t (st , at ) + ?t (st , at )Ft (st , at ) , or BA A ?BA t+1 (st , at ) = ?t (st , at ) ? ?t (st , at )Ft (st , at ) , 5 ? A B A ? where FtA (st , at ) = rt + ?QB t (st+1 , a ) ? Qt (st , at ) and Ft (st , at ) = rt + ?Qt (st+1 , b ) ? 1 BA QB (s , a ). We define ? = ? . Then t t t t 2 t BA B A E{?BA t+1 (st , at )|Pt } = ?t (st , at ) + E{?t (st , at )Ft (st , at ) ? ?t (st , at )Ft (st , at )|Pt } BA (st , at )E{FtBA (st , at )|Pt } , = (1 ? ?tBA (st , at ))?BA t (st , at ) + ?t  ? ? B where E{FtBA (st , at )|Pt } = ?E QA t (st+1 , b ) ? Qt (st+1 , a )|Pt . For this step it is important that the selection whether to update QA or QB is independent on the sample (e.g. random). ? ? ? B Assume E{QA t (st+1 , b )|Pt } ? E{Qt (st+1 , a )|Pt }. By definition of a as given in line 6 of A ? A A ? Algorithm 1 we have Qt (st+1 , a ) = maxa Qt (st+1 , a) ? Qt (st+1 , b ) and therefore  ? B ? E{FtBA (st , at )|Pt } = ?E QA t (st+1 , b ) ? Qt (st+1 , a )|Pt BA  ? B ? . ? ?E QA t (st+1 , a ) ? Qt (st+1 , a )|Pt ? ? ?t ? A ? ? Now assume E{QB t (st+1 , a )|Pt } > E{Qt (st+1 , b )|Pt } and note that by definition of b we have ? ? B (s , a ). Then (s , b ) ? Q QB t+1 t+1 t t  ? A ? E{FtBA (st , at )|Pt } = ?E QB t (st+1 , a ) ? Qt (st+1 , b )|Pt BA  ? A ? . ? ?E QB t (st+1 , b ) ? Qt (st+1 , b )|Pt ? ? ?t Clearly, one of the two assumptions must hold at each time step and in both cases we obtain the BA to desired result that |E{FtBA |Pt }| ? ?k?BA t k. Applying the lemma yields convergence of ?t zero, which in turn ensures that the original process also converges in the limit. 4 Experiments This section contains results on two problems, as an illustration of the bias of Q-learning and as a first practical comparison with Double Q-learning. The settings are simple to allow an easy interpretation of what is happening. Double Q-learning scales to larger problems and continuous spaces in the same way as Q-learning, so our focus here is explicitly on the bias of the algorithms. The settings are the gambling game of roulette and a small grid world. 
4 Experiments

This section contains results on two problems, as an illustration of the bias of Q-learning and as a first practical comparison with Double Q-learning. The settings are simple to allow an easy interpretation of what is happening. Double Q-learning scales to larger problems and continuous spaces in the same way as Q-learning, so our focus here is explicitly on the bias of the algorithms. The settings are the gambling game of roulette and a small grid world. There is considerable randomness in the rewards, and as a result we will see that Q-learning indeed performs poorly. The discount factor was 0.95 in all experiments.

We conducted two experiments on each problem. The learning rate was either linear, α_t(s, a) = 1/n_t(s, a), or polynomial, α_t(s, a) = 1/n_t(s, a)^0.8. For Double Q-learning, n_t(s, a) = n^A_t(s, a) if Q^A is updated and n_t(s, a) = n^B_t(s, a) if Q^B is updated, where n^A_t and n^B_t store the number of updates for each action for the corresponding value function. The polynomial learning rate was shown in previous work to be better both in theory and in practice [14].

4.1 Roulette

In roulette, a player chooses between 170 betting actions, including betting on a number, on either of the colors black or red, and so on. The payoff for each of these bets is chosen such that almost all bets have an expected payout of (1/38) · $36 = $0.947 per dollar, resulting in an expected loss of -$0.053 per play if we assume the player bets $1 every time.¹ We assume all betting actions transition back to the same state, and there is one action that stops playing, yielding $0. We ignore the available funds of the player as a factor and assume he bets $1 each turn.

Figure 1 shows the mean action values over all actions, as found by Q-learning and Double Q-learning. Each trial consisted of a synchronous update of all 171 actions. After 100,000 trials, Q-learning with a linear learning rate values all betting actions at more than $20 and shows little progress. With polynomial learning rates the performance improves, but Double Q-learning converges much more quickly. The poor average estimates of Q-learning are not caused by a few poorly estimated outliers: after 100,000 trials Q-learning valued all non-terminating actions between $22.63 and $22.67 for linear learning rates and between $9.58 and $9.64 for polynomial rates. In this setting Double Q-learning does not suffer from significant underestimations.

¹ Only the so-called "top line", which pays $6 per dollar when 00, 0, 1, 2 or 3 is hit, has a slightly lower expected value of -$0.079 per dollar.

Figure 1: The average action values according to Q-learning and Double Q-learning when playing roulette. The "walk-away" action is worth $0. Averaged over 10 experiments.

Figure 2: Results in the grid world for Q-learning and Double Q-learning. The first row shows average rewards per time step. The second row shows the maximal action value in the starting state S. Averaged over 10,000 experiments.
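The bias that drives these roulette results is easy to reproduce outside the full MDP. The sketch below uses a simplifying assumption of ours: a one-shot bandit with 170 identical straight-up-style bets paying +$35 with probability 1/38 and -$1 otherwise, rather than the actual mix of roulette bets. It compares the single estimator max_a μ̂(a), which is effectively what the max in Q-learning computes, with the double estimator of Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_samples, n_runs = 170, 100, 1000

def sample_rewards(n):
    # Net reward of a $1 straight-up-style bet: +35 w.p. 1/38, else -1.
    return np.where(rng.random(n) < 1 / 38, 35.0, -1.0)

single, double = [], []
for _ in range(n_runs):
    rewards = np.stack([sample_rewards(n_samples) for _ in range(n_actions)])
    # Single estimator: max over the sample means of all actions.
    single.append(rewards.mean(axis=1).max())
    # Double estimator: pick the best action on one half of the samples,
    # evaluate its value on the other half.
    half = n_samples // 2
    mu_a = rewards[:, :half].mean(axis=1)
    mu_b = rewards[:, half:].mean(axis=1)
    double.append(mu_b[mu_a.argmax()])

print(f"true value of every bet:   {35/38 - 37/38:+.3f}")
print(f"single estimator (mean):   {np.mean(single):+.3f}")  # positive bias
print(f"double estimator (mean):   {np.mean(double):+.3f}")  # no positive bias
```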
4.2 Grid World

Consider the small grid world MDP as shown in Figure 2. Each state has 4 actions, corresponding to the directions the agent can go. The starting state is in the lower left position and the goal state is in the upper right. Each time the agent selects an action that walks off the grid, the agent stays in the same state. On each non-terminating step, the agent receives a random reward of -12 or +10 with equal probability. In the goal state every action yields +5 and ends the episode. The optimal policy ends an episode after five actions, so the optimal average reward per step is +0.2. The exploration was ε-greedy with ε(s) = 1/√n(s), where n(s) is the number of times state s has been visited, assuring infinite exploration in the limit, which is a theoretical requirement for the convergence of both Q-learning and Double Q-learning. Such an ε-greedy setting is beneficial for Q-learning, since it implies that actions with large overestimations are selected more often than realistically valued actions. This can reduce the overestimation.

Figure 2 shows the average rewards in the first row and the maximum action value in the starting state in the second row. Double Q-learning performs much better in terms of its average rewards, but this does not imply that its estimates of the action values are accurate. The optimal value of the maximally valued action in the starting state is 5γ⁴ − Σ_{k=0}^{3} γ^k ≈ 0.36 (with γ = 0.95: 5 · 0.95⁴ − (1 + 0.95 + 0.95² + 0.95³) ≈ 4.073 − 3.710 ≈ 0.36), which is depicted in the second row of Figure 2 as a horizontal dotted line. We see that Double Q-learning does not get much closer to this value in 10,000 learning steps than Q-learning. However, even if the error of the action values is comparable, the policies found by Double Q-learning are clearly much better.

5 Discussion

We note an important difference between the well-known heuristic exploration technique of optimism in the face of uncertainty [21, 22] and the overestimation bias. Optimism about uncertain events can be beneficial, but Q-learning can overestimate actions that have been tried often, and the estimates can be higher than any realistic optimistic estimate. For instance, in roulette our initial action value estimate of $0 can be considered optimistic, since no action has an actual expected value higher than this. However, even after trying 100,000 actions, Q-learning on average estimated each gambling action to be worth almost $10. In contrast, although Double Q-learning can underestimate the values of some actions, it is easy to set the initial action values high enough to ensure optimism for actions that have experienced limited updates. Therefore, the technique of optimism in the face of uncertainty can be thought of as orthogonal to the over- and underestimation that is the topic of this paper.

The analysis in this paper is not only applicable to Q-learning. For instance, in a recent paper on multi-armed bandit problems, methods were proposed to exploit structure in the form of clusters of correlated arms in order to speed up convergence and reduce total regret [23]. The value of such a cluster is itself an estimation task, and the proposed methods included taking the mean value, which results in an underestimation of the actual value, and taking the maximum value, which is a case of the single estimator and results in an overestimation. It would be interesting to see how the double estimator approach fares in such a setting.

Although the settings in our experiments used stochastic rewards, our analysis is not limited to MDPs with stochastic reward functions. When the rewards are deterministic but the state transitions are stochastic, the same pattern of overestimations due to this noise can occur, and the same conclusions continue to hold.

6 Conclusion

We have presented a new algorithm called Double Q-learning that uses a double estimator approach to determine the value of the next state. To our knowledge, this is the first off-policy value-based reinforcement learning algorithm that does not have a positive bias in estimating the action values in stochastic environments. According to our analysis, Double Q-learning sometimes underestimates the action values, but it does not suffer from the overestimation bias that Q-learning does. In a roulette game and a maze problem, Double Q-learning was shown to reach good performance levels much more quickly.
Future work. Interesting future work would include research to obtain more insight into the merits of the Double Q-learning algorithm. For instance, some preliminary experiments in the grid world showed that Q-learning performs even worse with higher discount factors, whereas Double Q-learning is virtually unaffected. Additionally, the fact that we can construct positively biased and negatively biased off-policy algorithms raises the question whether it is also possible to construct an unbiased off-policy reinforcement-learning algorithm, without the high variance of unbiased on-policy Monte Carlo methods [24]. Possibly, this can be done by estimating the size of the overestimation and deducting it from the estimate. Unfortunately, the size of the overestimation depends on the number of actions and on the unknown distributions of the rewards and transitions, making this a non-trivial extension. More analysis of the performance of Q-learning and related algorithms such as Fitted Q-iteration [12] and Delayed Q-learning [10] is desirable. For instance, Delayed Q-learning can suffer from similar overestimations, although it does have polynomial convergence guarantees. This is similar to the polynomial learning rates: although performance is improved from an exponential to a polynomial rate [14], the algorithm still suffers from the inherent overestimation bias due to the single estimator approach. Furthermore, it would be interesting to see how the double estimator performs in practice when applied to such extensions of Q-learning, yielding for instance Fitted Double Q-iteration or Delayed Double Q-learning.

Acknowledgments

The authors wish to thank Marco Wiering and Gerard Vreeswijk for helpful comments. This research was made possible thanks to grant 612.066.514 of the Dutch organization for scientific research (Nederlandse Organisatie voor Wetenschappelijk Onderzoek, NWO).

References

[1] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, England, 1989.
[2] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279-292, 1992.
[3] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[4] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6:1185-1201, 1994.
[5] J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16:185-202, 1994.
[6] M. L. Littman and C. Szepesvári. A generalized reinforcement-learning model: Convergence and applications. In L. Saitta, editor, Proceedings of the 13th International Conference on Machine Learning (ICML-96), pages 310-318, Bari, Italy, 1996. Morgan Kaufmann.
[7] R. H. Crites and A. G. Barto. Improving elevator performance using reinforcement learning. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 1017-1023, Cambridge, MA, 1996. MIT Press.
[8] W. D. Smart and L. P. Kaelbling. Effective reinforcement learning for mobile robots. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (ICRA 2002), pages 3404-3410, Washington, DC, USA, 2002.
[9] M. A. Wiering and H. P. van Hasselt. Ensemble algorithms in reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38(4):930-936, 2008.
[10] A. L. Strehl, L. Li, E. Wiewiora, J. Langford, and M. L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881-888. ACM, 2006.
[11] M. J. Kearns and S. P. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms.
In Neural Information Processing Systems 12, pages 996-1002. MIT Press, 1999.
[12] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(1):503-556, 2005.
[13] C. Szepesvári. The asymptotic convergence-rate of Q-learning. In NIPS '97: Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, pages 1064-1070, Cambridge, MA, USA, 1998. MIT Press.
[14] E. Even-Dar and Y. Mansour. Learning rates for Q-learning. Journal of Machine Learning Research, 5:1-25, 2003.
[15] E. Van den Steen. Rational overoptimism (and other biases). American Economic Review, 94(4):1141-1151, September 2004.
[16] J. E. Smith and R. L. Winkler. The optimizer's curse: Skepticism and postdecision surprise in decision analysis. Management Science, 52(3):311-322, 2006.
[17] E. Capen, R. Clapp, and T. Campbell. Bidding in high risk situations. Journal of Petroleum Technology, 23:641-653, 1971.
[18] R. H. Thaler. Anomalies: The winner's curse. Journal of Economic Perspectives, 2(1):191-202, Winter 1988.
[19] J. L. W. V. Jensen. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Mathematica, 30(1):175-193, 1906.
[20] S. P. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287-308, 2000.
[21] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
[22] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.
[23] S. Pandey, D. Chakrabarti, and D. Agarwal. Multi-armed bandit problems with dependent arms. In Proceedings of the 24th International Conference on Machine Learning, pages 721-728. ACM, 2007.
[24] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, pages 97-109, 1970.
Network Flow Algorithms for Structured Sparsity

Julien Mairal, INRIA - Willow Project-Team, [email protected]
Rodolphe Jenatton, INRIA - Willow Project-Team, [email protected]
Guillaume Obozinski, INRIA - Willow Project-Team, [email protected]
Francis Bach, INRIA - Willow Project-Team, [email protected]

(Julien Mairal and Rodolphe Jenatton contributed equally. The Willow Project-Team is part of the Laboratoire d'Informatique de l'Ecole Normale Supérieure, INRIA/ENS/CNRS UMR 8548.)

Abstract

We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of ℓ∞-norms over groups of variables. Whereas a lot of effort has been put in developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.

1 Introduction

Sparse linear models have become a popular framework for dealing with various unsupervised and supervised tasks in machine learning and signal processing. In such models, linear combinations of small sets of variables are selected to describe the data. Regularization by the ℓ1-norm has emerged as a powerful tool for addressing this combinatorial variable selection problem, relying on both a well-developed theory (see [1] and references therein) and efficient algorithms [2, 3, 4].

The ℓ1-norm primarily encourages sparse solutions, regardless of the potential structural relationships (e.g., spatial, temporal or hierarchical) existing between the variables. Much effort has recently been devoted to designing sparsity-inducing regularizations capable of encoding higher-order information about the allowed patterns of non-zero coefficients [5, 6, 7, 8, 9, 10], with successful applications in bioinformatics [6, 11], topic modeling [12] and computer vision [9, 10]. By considering sums of norms over appropriate subsets, or groups, of variables, these regularizations control the sparsity patterns of the solutions. The underlying optimization problem is usually difficult, in part because it involves nonsmooth components. Proximal methods have proven to be effective in this context, essentially because of their fast convergence rates and their scalability [3, 4]. While the settings where the penalized groups of variables do not overlap or are embedded in a tree-shaped hierarchy [12] have already been studied, regularizations with general overlapping groups have, to the best of our knowledge, never been addressed with proximal methods.

This paper makes the following contributions:
- It shows that the proximal operator associated with the structured norm we consider can be
It demonstrates that our method is relevant for various applications, from video background subtraction to estimation of hierarchical structures for dictionary learning of natural image patches. 2 Structured Sparse Models We consider in this paper convex optimization problems of the form min f (w) + ??(w), w?Rp (1) where f : Rp ? R is a convex differentiable function and ? : Rp ? R is a convex, nonsmooth, sparsity-inducing regularization function. When one knows a priori that the solutions of this learning problem have only a few non-zero coefficients, ? is often chosen to be the ?1 -norm (see [1, 2]). When these coefficients are organized in groups, a penalty encoding explicitly this prior knowledge can improve the prediction performance and/or interpretability of the learned models [13, 14]. Denoting by G a set of groups of indices, such a penalty might for example take the form: X X ?(w) , ?g max |wj | = ?g kwg k? , (2) g?G j?g g?G where wj is the j-th entry of w for j in [1; p] , {1, . . . , p}, the vector wg in R|g| records the coefficients of w indexed by g in G, and the scalars ?g are positive weights. A sum of ?2 -norms is also used in the literature [7], but the ?? -norm is piecewise linear, a property that we take advantage of in this paper. Note that when G is the set of singletons of [1; p], we get back the ?1 -norm. If G is a more general partition of [1; p], variables are selected in groups rather than individually. When the groups overlap, ? is still a norm and sets groups of variables to zero together [5]. The latter setting has first been considered for hierarchies [7, 11, 15], and then extended to general group structures [5].1 Solving Eq. (1) in this context becomes challenging and is the topic of this paper. Following Jenatton et al. [12] who tackled the case of hierarchical groups, we propose to approach this problem with proximal methods, which we now introduce. 2.1 Proximal Methods In a nutshell, proximal methods can be seen as a natural extension of gradient-based techniques, and they are well suited to minimizing the sum f + ?? of two convex terms, a smooth function f ?continuously differentiable with Lipschitz-continuous gradient? and a potentially non-smooth function ?? (see [16] and references therein). At each iteration, the function f is linearized at the current estimate w0 and the so-called proximal problem has to be solved: L min f (w0 ) + (w ? w0 )? ?f (w0 ) + ??(w) + kw ? w0 k22 . w?Rp 2 The quadratic term keeps the solution in a neighborhood where the current linear approximation holds, and L > 0 is an upper bound on the Lipschitz constant of ?f . This problem can be rewritten as 1 2 min ku ? wk2 + ?? ?(w), (3) w?Rp 2 with ?? , ?/L, and u , w0 ? L1 ?f (w0 ). We call proximal operator associated with the regularization ?? ? the function that maps a vector u in Rp onto the (unique, by strong convexity) solution w? of Eq. (3). Simple proximal methods use w? as the next iterate, but accelerated variants [3, 4] are also based on the proximal operator and require to solve problem (3) exactly and efficiently to enjoy their fast convergence rates. Note that when ? is the ?1 -norm, the solution of Eq. (3) is obtained by soft-thresholding [16]. The approach we develop in the rest of this paper extends [12] to the case of general overlapping groups when ? is a weighted sum of ?? 
3 A Quadratic Min-Cost Flow Formulation

In this section, we show that a convex dual of problem (3) for general overlapping groups G can be reformulated as a quadratic min-cost flow problem. We present an efficient algorithm to solve it exactly, as well as a related algorithm to compute the dual norm of Ω. We start by considering the dual formulation to problem (3) introduced in [12], for the case where Ω is a sum of ℓ∞-norms:

Lemma 1 (Dual of the proximal problem [12]). Given u in R^p, consider the problem

min_{ξ ∈ R^{p×|G|}} ½||u − Σ_{g∈G} ξ^g||²₂  s.t.  ∀g ∈ G, ||ξ^g||₁ ≤ λη_g and ξ^g_j = 0 if j ∉ g ,   (4)

where ξ = (ξ^g)_{g∈G} is in R^{p×|G|}, and ξ^g_j denotes the j-th coordinate of the vector ξ^g. Then every solution ξ* = (ξ*^g)_{g∈G} of Eq. (4) satisfies w* = u − Σ_{g∈G} ξ*^g, where w* is the solution of Eq. (3).

Without loss of generality,³ we assume from now on that the scalars u_j are all non-negative, and we constrain the entries of ξ to be non-negative. We now introduce a graph modeling of problem (4).

3.1 Graph Model

Let G be a directed graph G = (V, E, s, t), where V is a set of vertices, E ⊆ V × V a set of arcs, s a source, and t a sink. Let c and c̄ be two functions on the arcs, c : E → R and c̄ : E → R₊, where c is a cost function and c̄ is a non-negative capacity function. A flow is a non-negative function on arcs that satisfies capacity constraints on all arcs (the value of the flow on an arc is less than or equal to the arc capacity) and conservation constraints on all vertices (the sum of incoming flows at a vertex is equal to the sum of outgoing flows), except for the source and the sink.

We introduce a canonical graph G associated with our optimization problem, uniquely characterized by the following construction:

(i) V is the union of two sets of vertices Vu and Vgr, where Vu contains exactly one vertex for each index j in [1; p], and Vgr contains exactly one vertex for each group g in G. We thus have |V| = |G| + p. For simplicity, we identify groups and indices with the vertices of the graph.
(ii) For every group g in G, E contains an arc (s, g). These arcs have capacity λη_g and zero cost.
(iii) For every group g in G, and every index j in g, E contains an arc (g, j) with zero cost and infinite capacity. We denote by ξ^g_j the flow on this arc.
(iv) For every index j in [1; p], E contains an arc (j, t) with infinite capacity and a cost c_j ≜ ½(u_j − ξ̄_j)², where ξ̄_j is the flow on (j, t). Note that by flow conservation, we necessarily have ξ̄_j = Σ_{g∈G} ξ^g_j.

Examples of canonical graphs are given in Figures 1(a)-(c). The flows ξ^g_j associated with G can now be identified with the variables of problem (4): indeed, the sum of the costs on the edges leading to the sink is equal to the objective function of (4), while the capacities of the arcs (s, g) match the constraints on each group. This shows that finding a flow minimizing the sum of the costs on such a graph is equivalent to solving problem (4).
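The construction (i)-(iv) translates almost literally into code. The sketch below builds the canonical graph with the networkx package; arcs without a 'capacity' attribute are treated by networkx's flow routines as having infinite capacity, which matches (iii) and (iv). Since the max-flow computations used later only need the capacities, the quadratic costs are not stored; u_j is kept as a node attribute and η_g is recorded on the arcs (s, g) for later reuse. The function and attribute names are ours, not from the paper.

```python
import networkx as nx

def build_canonical_graph(u, groups, eta, lam):
    """Canonical graph of Section 3.1 for the dual problem (4).

    Vertices: source 's', sink 't', one node per index j (ints), and
    one node per group (tagged tuples ('g', g_id)). Capacities follow
    (i)-(iv); edges with no 'capacity' attribute have infinite capacity.
    """
    G = nx.DiGraph()
    for g_id, (g, eta_g) in enumerate(zip(groups, eta)):
        # (ii) arc (s, g) with capacity lam * eta_g and zero cost.
        G.add_edge('s', ('g', g_id), capacity=lam * eta_g, eta=eta_g)
        for j in g:
            # (iii) arc (g, j) with infinite capacity and zero cost.
            G.add_edge(('g', g_id), j)
    for j, u_j in enumerate(u):
        # (iv) arc (j, t) with infinite capacity; its quadratic cost
        # 0.5*(u_j - xi_bar_j)^2 is handled by the algorithm, not stored.
        G.add_edge(j, 't')
        G.nodes[j]['u'] = u_j
    return G
```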
When some groups are included in others, the canonical graph can be simplified to yield a graph with a smaller number of edges. Specifically, if h and g are groups with h ⊂ g, the edges (g, j) for j ∈ h carrying a flow ξ^g_j can be removed and replaced by a single edge (g, h) of infinite capacity and zero cost, carrying the flow Σ_{j∈h} ξ^g_j. This simplification is illustrated in Figure 1(d), with a graph equivalent to the one of Figure 1(c). It does not change the optimal value of ξ̄*, which is the quantity of interest for computing the optimal primal variable w* (a proof and a formal definition of these equivalent graphs are available in a longer technical report [17]). These simplifications are useful in practice, since they reduce the number of edges in the graph and improve the speed of the algorithms we are now going to present.

³ Let ξ* denote a solution of Eq. (4). Optimality conditions of Eq. (4) derived in [12] show that for all j in [1; p], the signs of the non-zero coefficients ξ*^g_j for g in G are the same as the signs of the entries u_j. To solve Eq. (4), one can therefore flip the signs of the negative variables u_j, then solve the modified dual formulation (with non-negative variables), which gives the magnitude of the entries ξ*^g_j (the signs of these being known).

[Figure 1 shows four example canonical graphs, each routing flow from the source s through group vertices and index vertices into the sink t; the per-arc flow labels are omitted here. Panels: (a) G = {g = {1, 2, 3}}; (b) G = {g = {1, 2}, h = {2, 3}}; (c) G = {g = {1, 2, 3}, h = {2, 3}}; (d) G = {g = {1} ∪ h, h = {2, 3}}.]

Figure 1: Graph representation of simple proximal problems with different group structures G. The three indices 1, 2, 3 are represented as grey squares, and the groups g, h in G as red discs. The source is linked to every group g, h with respective maximum capacity λη_g, λη_h and zero cost. Each variable u_j is linked to the sink t, with infinite capacity and a cost c_j ≜ ½(u_j − ξ̄_j)². All other arcs in the graph have zero cost and infinite capacity. They represent inclusion relationships between groups, and between groups and variables. The graphs (c) and (d) correspond to a special case of tree-structured hierarchy in the sense of [12]. Their min-cost flow problems are equivalent.

3.2 Computation of the Proximal Operator

Quadratic min-cost flow problems have been well studied in the operations research literature [18]. One of the simplest cases, where G contains a single group g (Ω is the ℓ∞-norm) as in Figure 1(a), can be solved by an orthogonal projection onto the ℓ1-ball of radius λη_g. It has been shown that such a projection can be done in O(p) operations [18, 19]. When the group structure is a tree as in Figure 1(d), the problem can be solved in O(pd) operations, where d is the depth of the tree [12, 18].⁴

The general case of overlapping groups is more difficult. Hochbaum and Hong have shown in [18] that quadratic min-cost flow problems can be reduced to a specific parametric max-flow problem, for which an efficient algorithm exists [20].⁵ While this generic approach could be used to solve Eq. (4), we propose to use Algorithm 1, which also exploits the fact that our graphs have non-zero costs only on edges leading to the sink. As shown in the technical report [17], it has significantly better performance in practice. This algorithm clearly shares some similarities with existing approaches in network flow optimization, such as the simplified version of [20] presented in [21] that uses a divide-and-conquer strategy. Moreover, we discovered after this paper was accepted for publication that an equivalent algorithm exists for minimizing convex functions over polymatroid sets [22]. This equivalence, however, requires a non-trivial representation of structured sparsity-inducing norms with submodular functions, as recently pointed out by [23].

⁴ When restricted to the case where Ω is a sum of ℓ∞-norms, the approach of [12] is in fact similar to [18].
⁵ By definition, a parametric max-flow problem consists in solving, for every value of a parameter, a max-flow problem on a graph whose arc capacities depend on this parameter.
This algorithm clearly shares some similarities with existing approaches in network flow optimization such as the simplified version of [20] presented in [21] that uses a divide and conquer strategy. Moreover, we have discovered after that this paper was accepted for publication that an equivalent algorithm exists for minimizing convex functions over polymatroid 4 When restricted to the case where ? is a sum of ?? -norms, the approach of [12] is in fact similar to [18]. By definition, a parametric max-flow problem consists in solving, for every value of a parameter, a maxflow problem on a graph whose arc capacities depend on this parameter. 5 4 sets [22]. This equivalence, however, requires a non-trivial representation of structured sparsityinducing norms with submodular functions, as recently pointed out by [23]. Algorithm 1 Computation of the proximal operator for overlapping groups. 1: Inputs: u ? Rp , a set of groups G, positive weights (?g )g?G , and ? (regularization parameter). 2: Build the initial graph G0 = (V0 , E0 , s, t) as explained in Section 3.2. 3: Compute the optimal flow: ?? ? computeFlow(V0 , E0 ). 4: Return: w = u ? ?? (optimal solution of the proximal problem). Function computeFlow(V = Vu ? Vgr , E) P P P 1: Projection step: ? ? arg min? j?Vu 12 (uj ? ? j )2 s.t. g?Vgr ?g . j?Vu ? j ? ? 2: For all nodes j in Vu , set ? j to be the capacity of the arc (j, t). 3: Max-flow step: Update (??j )j?Vu by computing a max-flow on the graph (V, E, s, t). 4: if ? j ? Vu s.t. ??j 6= ? j then 5: Denote by (s, V + ) and (V ? , t) the two disjoint subsets of (V, s, t) separated by the minimum (s, t)-cut of the graph, and remove the arcs between V + and V ? . Call E + and E ? the two remaining disjoint subsets of E corresponding to V + and V ? . 6: (??j )j?Vu+ ? computeFlow(V + , E + ). 7: (??j )j?Vu? ? computeFlow(V ? , E ? ). 8: end if 9: Return: (??j )j?Vu . TheP intuition behind this algorithm is the following: The first step looks for a candidate value for ??= g?G ? g by solving a relaxed version of problem Eq. (4), where the constraints k? g k1 ? ??g are P ? ? 1?? dropped and replaced by a single one k?k g?G ?g . The relaxed problem only depends on ? 2 and can be solved in linear time. By calling its solution ?, it provides a lower bound ku ? ?k2 /2 on the optimal cost. Then, the second step tries to find a feasible flow of the original problem (4) such that the resulting vector ?? matches ?, which is in fact a max-flow problem [24]. If ?? = ?, then the cost of the flow reaches the lower bound, and the flow is optimal. If ?? 6= ?, the lower bound is not achievable, and we construct a minimum (s, t)-cut of the graph [25] that defines two disjoints sets of nodes V + and V ? ; V + is the part of the graph that could potentially have received more flow from the source (the arcs between s and V + are not saturated), whereas all arcs linking s to V ? are saturated. At this point, it is possible to show that the value of the optimal min-cost flow on all arcs between V + and V ? is necessary zero. Thus, removing them yields an equivalent optimization problem, which can be decomposed into two independent problems of smaller sizes and solved recursively by the calls to computeFlow(V + , E + ) and computeFlow(V ? , E ? ). A formal proof of correctness of Algorithm 1 and further details are relegated to [17]. The approach of [18, 20] is guaranteed to have the same worst-case complexity as a single max-flow algorithm. 
The approach of [18, 20] is guaranteed to have the same worst-case complexity as a single max-flow algorithm. However, we have experimentally observed a significant discrepancy between the worst-case and empirical complexities for these flow problems, essentially because the empirical cost of each max-flow is significantly smaller than its theoretical cost. Despite the fact that the worst-case guarantee of our algorithm is weaker than theirs (up to a factor |V|), it is more adapted to the structure of our graphs and has proven to be much faster in our experiments (see the technical report [17]).

Some implementation details are crucial to the efficiency of the algorithm:
- Exploiting connected components: When there exists no arc between two subsets of V, it is possible to process them independently in order to solve the global min-cost flow problem.
- Efficient max-flow algorithm: We have implemented the "push-relabel" algorithm of [24] for solving our max-flow problems, using classical heuristics that significantly speed it up in practice (see [24, 26]). This algorithm leverages the concept of pre-flow, which relaxes the definition of flow and allows vertices to have a positive excess. It can be initialized with any valid pre-flow, enabling warm restarts when the max-flow is called several times, as in our algorithm.
- Improved projection step: The first line of the function computeFlow can be replaced by γ ← arg min_γ Σ_{j∈Vu} ½(u_j − γ_j)² s.t. Σ_{j∈Vu} γ_j ≤ λ Σ_{g∈Vgr} η_g and |γ_j| ≤ λ Σ_{g∋j} η_g. The idea is that the structure of the graph will not allow ξ̄_j to be greater than λ Σ_{g∋j} η_g after the max-flow step. Adding these additional constraints leads to better performance when the graph is not well balanced. This modified projection step can still be computed in linear time [19].

3.3 Computation of the Dual Norm

The dual norm Ω* of Ω, defined for any vector κ in R^p by Ω*(κ) ≜ max_{Ω(z)≤1} zᵀκ, is a key quantity for the study of sparsity-inducing regularizations [5, 15, 27]. We use it here to monitor the convergence of the proximal method through a duality gap, and hence to define a proper optimality criterion for problem (1). We denote by f* the Fenchel conjugate of f [28], defined by f*(κ) ≜ sup_z [zᵀκ − f(z)]. The duality gap for problem (1) can be derived from standard Fenchel duality arguments [28]; it is equal to f(w) + λΩ(w) + f*(−κ) for w, κ in R^p with Ω*(κ) ≤ λ. Therefore, evaluating the duality gap requires computing Ω* efficiently in order to find a feasible dual variable κ. This is equivalent to solving another network flow problem, based on the following variational formulation:

Ω*(κ) = min_{ξ ∈ R^{p×|G|}} τ  s.t.  Σ_{g∈G} ξ^g = κ, and ∀g ∈ G, ||ξ^g||₁ ≤ τη_g with ξ^g_j = 0 if j ∉ g .   (5)

In the network problem associated with (5), the capacities on the arcs (s, g), g ∈ G, are set to τη_g, and the capacities on the arcs (j, t), j in [1; p], are fixed to κ_j. Solving problem (5) amounts to finding the smallest value of τ such that there exists a flow saturating the capacities κ_j on the arcs leading to the sink t (i.e., such that ξ̄ = κ). The algorithm below is proven to be correct in [17].

Algorithm 2 Computation of the dual norm.
1: Inputs: κ ∈ R^p, a set of groups G, positive weights (η_g)_{g∈G}.
2: Build the initial graph G_0 = (V_0, E_0, s, t) as explained in Section 3.3.
3: τ ← dualNorm(V_0, E_0).
4: Return: τ (value of the dual norm).

Function dualNorm(V = Vu ∪ Vgr, E)
1: τ ← (Σ_{j∈Vu} κ_j)/(Σ_{g∈Vgr} η_g) and set the capacities of the arcs (s, g) to τη_g for all g in Vgr.
2: Max-flow step: Update (ξ̄_j)_{j∈Vu} by computing a max-flow on the graph (V, E, s, t).
3: if ∃ j ∈ Vu s.t. ξ̄_j ≠ κ_j then
4:   Define (V⁺, E⁺) and (V⁻, E⁻) as in Algorithm 1, and set τ ← dualNorm(V⁻, E⁻).
5: end if
6: Return: τ.
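In the same spirit, dualNorm admits a direct transcription. The sketch below reuses the canonical-graph helper from Section 3.1 (which, as an assumption of our sketch, stores η_g on the arcs (s, g) under an 'eta' attribute) and, as for the computeFlow sketch above, omits the warm-starts and heuristics of the actual implementation [17].

```python
import networkx as nx

def dual_norm(G, kappa, group_nodes):
    """Recursive dualNorm of Algorithm 2 (sketch). `kappa` maps variable
    vertices to kappa_j >= 0; `group_nodes` are the group vertices Vgr."""
    var_nodes = list(kappa)
    # Step 1: candidate tau, and capacities tau*eta_g on the arcs (s, g).
    tau = sum(kappa.values()) / sum(G['s'][g]['eta'] for g in group_nodes)
    for g in group_nodes:
        G['s'][g]['capacity'] = tau * G['s'][g]['eta']
    for j in var_nodes:
        G[j]['t']['capacity'] = kappa[j]
    _, flow = nx.maximum_flow(G, 's', 't')   # step 2: max-flow step
    if all(abs(flow[j]['t'] - kappa[j]) < 1e-9 for j in var_nodes):
        return tau                           # all kappa_j are saturated
    # Steps 3-4: recurse on the saturated side (V-, E-) only.
    _, (side_s, side_t) = nx.minimum_cut(G, 's', 't')
    H = G.subgraph(side_t | {'s', 't'}).copy()
    return dual_norm(H,
                     {j: kappa[j] for j in var_nodes if j in side_t},
                     [g for g in group_nodes if g in side_t])
```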
4 Applications and Experiments

Our experiments use the algorithm of [4] based on our proximal operator, with the weights η_g set to 1.

4.1 Speed Comparison

We compare our method (ProxFlow) with two generic optimization techniques, namely a subgradient descent (SG) and an interior point method,⁶ on a regularized linear regression problem. Both SG and ProxFlow are implemented in C++. Experiments are run on a single-core 2.8 GHz CPU. We consider a design matrix X in R^{n×p} built from overcomplete dictionaries of discrete cosine transforms (DCT), which are naturally organized on one- or two-dimensional grids and display local correlations. The following families of groups G using this spatial information are considered: (1) every contiguous sequence of length 3 in the one-dimensional case, and (2) every 3×3 square in the two-dimensional setting. We generate vectors y in R^n according to the linear model y = Xw_0 + ε, where ε ~ N(0, 0.01||Xw_0||²₂). The vector w_0 has about 20% nonzero components, randomly selected while respecting the structure of G, and uniformly generated in [-1, 1].

In our experiments, the regularization parameter λ is chosen to achieve the same sparsity as w_0. For SG, we take the step size to be equal to a/(k + b), where k is the iteration number, and (a, b) are the best parameters selected in {10⁻³, . . . , 10} × {10², 10³, 10⁴}. For the interior point methods, since problem (1) can be cast either as a quadratic program (QP) or as a conic program (CP), we show in Figure 2 the results for both formulations. Our approach compares favorably with the other methods on three problems of different sizes, (n, p) ∈ {(100, 10³), (1024, 10⁴), (1024, 10⁵)}; see Figure 2. In addition, note that QP, CP and SG do not produce sparse solutions, whereas ProxFlow does. We have also run ProxFlow and SG on a larger dataset with (n, p) = (100, 10⁶): after 12 hours, ProxFlow and SG had reached relative duality gaps of 0.0006 and 0.02, respectively.⁷

⁶ In our simulations, we use the commercial software Mosek, http://www.mosek.com/.
⁷ Due to the computational burden, QP and CP could not be run on every problem.

[Figure 2 contains three log-log panels, "n=100, p=1000, one-dimensional DCT", "n=1024, p=10000, two-dimensional DCT" and "n=1024, p=100000, one-dimensional DCT", plotting log(relative distance to optimum) against log(CPU time) in seconds for CP, QP, ProxFlow and SG; the plotted values are omitted here.]

Figure 2: Speed comparisons: distance to the optimal primal value versus CPU time (log-log scale).⁶

Figure 3: From left to right: original image y; estimated background Xw; foreground (the sparsity pattern of e used as a mask on y) estimated with ℓ1; foreground estimated with ℓ1 + Ω; another foreground obtained with Ω, on a different image, with the same values of λ₁, λ₂ as for the previous image. For the top row, the percentage of pixels matching the ground truth is 98.8% with Ω, 87.0% without. For the bottom row, the result is 93.8% with Ω, 90.4% without (best seen in color).
4.2 Background Subtraction

Following [9, 10], we consider a background subtraction task. Given a sequence of frames from a fixed camera, we try to segment out foreground objects in a new image. If we denote by y ∈ R^n a test image, we model y as a sparse linear combination of p other images X ∈ R^{n×p}, plus an error term e in R^n, i.e., y ≈ Xw + e for some sparse vector w in R^p. This approach is reminiscent of [29] in the context of face recognition, where e is further made sparse to deal with occlusions. The term Xw accounts for background parts present in both y and X, while e contains the specific, or foreground, objects in y. The resulting optimization problem is

min_{w,e} ½||y − Xw − e||²₂ + λ₁||w||₁ + λ₂||e||₁ ,  with λ₁, λ₂ ≥ 0.

In this formulation, the ℓ1-norm penalty on e does not take into account the fact that neighboring pixels in y are likely to share the same label (background or foreground), which may lead to scattered pieces of foreground and background regions (Figure 3). We therefore add a structured regularization term Ω on e, where the groups in G are all the overlapping 3×3 squares of the image. A dataset with hand-segmented evaluation images is used to illustrate the effect of Ω.⁸ For simplicity, we use a single regularization parameter, i.e., λ₁ = λ₂, chosen to maximize the number of pixels matching the ground truth. We consider p = 200 images with n = 57600 pixels (i.e., a resolution of 120×160, times 3 for the RGB channels). As shown in Figure 3, adding Ω improves the background subtraction results for the two tested videos by encoding, unlike the ℓ1-norm, both spatial and color consistency.

⁸ http://research.microsoft.com/en-us/um/people/jckrumm/wallflower/testimages.htm
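Given the proximal operators, this formulation fits directly into the framework of Section 2.1. Below is a minimal ISTA-style sketch for the plain ℓ1 variant; the step size, iteration count and the crude Lipschitz bound L = (||X||₂ + 1)² (valid because the smooth term is ½||y − [X I][w; e]||²) are assumptions of ours. Since the ℓ1-norm is itself a sum of ℓ∞-norms over singletons, adding the structured term amounts to replacing the soft-thresholding on e by the proximal operator of λ₂||e||₁ + Ω(e), computed with Algorithm 1.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (cf. Section 2.1).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def background_subtraction_l1(X, y, lam1, lam2, n_iter=500):
    """ISTA sketch for min_{w,e} 0.5*||y - Xw - e||_2^2
                                 + lam1*||w||_1 + lam2*||e||_1."""
    n, p = X.shape
    # Gradient of the smooth term is Lipschitz with constant
    # ||[X I]||_2^2 <= (||X||_2 + 1)^2, a crude but valid bound.
    L = (np.linalg.norm(X, 2) + 1.0) ** 2
    w, e = np.zeros(p), np.zeros(n)
    for _ in range(n_iter):
        r = X @ w + e - y                    # gradient is (X^T r, r)
        w = soft_threshold(w - (X.T @ r) / L, lam1 / L)
        e = soft_threshold(e - r / L, lam2 / L)
    return w, e
```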
We study whether learning the hierarchy of the dictionary elements improves the denoising performance, compared to standard sparse coding (i.e., when ?tree is the ?1 -norm and ?2 = 0) and the hierarchical dictionary learning of [12] based on predefined trees (i.e., ?2 = 0). The dimensions of the training set ? 50 000 patches of size 8?8 for dictionaries with up to p = 400 elements ? impose to handle large graphs, with |E| ? |V | ? 4.107 . Since problem (6) is too large to be solved many times to select the regularization parameters (?1 , ?2 ) rigorously, we use the following heuristics: we optimize mostly with the currently pruned tree held fixed (i.e., ?2 = 0), and only prune the tree (i.e., ?2 > 0) every few steps on a random subset of 10 000 patches. We consider the same hierarchies as in [12], involving between 30 and 400 dictionary elements. The regularization parameter ?1 is selected on the validation set of 25 000 patches, for both sparse coding (Flat) and hierarchical dictionary learning (Tree). Starting from the tree giving the best performance (in this case the largest one, see Figure 4), we solve problem (6) following our heuristics, for increasing values of ?2 . As shown in Figure 4, there is a regime where our approach performs significantly better than the two other compared methods. The standard deviation of the noise is 0.2 (the pixels have values in [0, 1]); no significant improvements were observed for lower levels of noise. Denoising Experiment: Mean Square Error 0.21 Flat Tree Multi?task Tree 0.2 0.19 0 100 200 300 Dictionary Size 400 Figure 4: Left: Hierarchy obtained by pruning a larger tree of 76 elements. Right: Mean square error versus dictionary size. The error bars represent two standard deviations, based on three runs. 5 Conclusion We have presented a new optimization framework for solving sparse structured problems involving sums of ?? -norms of any (overlapping) groups of variables. Interestingly, this sheds new light on connections between sparse methods and the literature of network flow optimization. In particular, the proximal operator for the formulation we consider can be cast as a quadratic min-cost flow problem, for which we propose an efficient and simple algorithm. This allows the use of accelerated gradient methods. Several experiments demonstrate that our algorithm can be applied to a wide class of learning problems, which have not been addressed before within sparse methods. Acknowledgments This paper was partially supported by the European Research Council (SIERRA Project). The authors would like to thank Jean Ponce for interesting discussions and suggestions. 9 The simplified case where ?tree and ?joint are the ?1 - and mixed ?1 /?2 -norms [13] corresponds to [30]. 8 References [1] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat., 37(4):1705?1732, 2009. [2] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Stat., 32(2):407?499, 2004. [3] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, 2007. [4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci., 2(1):183?202, 2009. [5] R. Jenatton, J-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, 2009. Preprint arXiv:0904.3523v1. [6] L. Jacob, G. Obozinski, and J.-P. 
[6] L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlap and graph Lasso. In Proc. ICML, 2009.
[7] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Ann. Stat., 37(6A):3468-3497, 2009.
[8] R. G. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. IEEE T. Inform. Theory, 2010. To appear.
[9] V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk. Sparse signal recovery using Markov random fields. In Adv. NIPS, 2008.
[10] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proc. ICML, 2009.
[11] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In Proc. ICML, 2010.
[12] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In Proc. ICML, 2010.
[13] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Roy. Stat. Soc. B, 68:49-67, 2006.
[14] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Stat. Comput., 20(2):231-252, 2010.
[15] F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Adv. NIPS, 2008.
[16] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, 2010.
[17] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured sparsity. Technical report, 2010. Preprint arXiv:1008.5209v1.
[18] D. S. Hochbaum and S. P. Hong. About strongly polynomial time algorithms for quadratic optimization over submodular constraints. Math. Program., 69(1):269-309, 1995.
[19] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Oper. Res. Lett., 3:163-166, 1984.
[20] G. Gallo, M. E. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM J. Comput., 18:30-55, 1989.
[21] M. Babenko and A. V. Goldberg. Experimental evaluation of a parametric flow algorithm. Technical report, Microsoft Research, 2006. MSR-TR-2006-77.
[22] H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. Eur. J. Oper. Res., pages 227-236, 1991.
[23] F. Bach. Structured sparsity-inducing norms through submodular functions. In Adv. NIPS, 2010.
[24] A. V. Goldberg and R. E. Tarjan. A new approach to the maximum flow problem. In Proc. of ACM Symposium on Theory of Computing, 1986.
[25] L. R. Ford and D. R. Fulkerson. Maximal flow through a network. Canadian J. Math., 8(3), 1956.
[26] B. V. Cherkassky and A. V. Goldberg. On implementing the push-relabel method for the maximum flow problem. Algorithmica, 19(4):390-410, 1997.
[27] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Adv. NIPS, 2009.
[28] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
[29] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE T. Pattern Anal., pages 210-227, 2008.
[30] P. Sprechmann, I. Ramirez, G. Sapiro, and Y. C. Eldar. Collaborative hierarchical sparse modeling. Technical report, 2010. Preprint arXiv:1003.0400v1.
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

Han Liu, Kathryn Roeder, Larry Wasserman
Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

A challenging problem in estimating high-dimensional graphical models is to choose the regularization parameter in a data-dependent way. The standard techniques include K-fold cross-validation (K-CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Though these methods work well for low-dimensional problems, they are not suitable in high dimensional settings. In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs. The method has a clear interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. This interpretation requires essentially no conditions. Under mild conditions, we show that StARS is partially sparsistent in terms of graph estimation: i.e. with high probability, all the true edges will be included in the selected model even when the graph size diverges with the sample size. Empirically, the performance of StARS is compared with the state-of-the-art model selection procedures, including K-CV, AIC, and BIC, on both synthetic data and a real microarray dataset. StARS outperforms all these competing procedures.

1 Introduction

Undirected graphical models have emerged as a useful tool because they allow for a stochastic description of complex associations in high-dimensional data. For example, biological processes in a cell lead to complex interactions among gene products. It is of interest to determine which features of the system are conditionally independent. Such problems require us to infer an undirected graph from i.i.d. observations. Each node in this graph corresponds to a random variable and the existence of an edge between a pair of nodes represents their conditional independence relationship. Gaussian graphical models [4, 23, 5, 9] are by far the most popular approach for learning high dimensional undirected graph structures. Under the Gaussian assumption, the graph can be estimated using the sparsity pattern of the inverse covariance matrix. If two variables are conditionally independent, the corresponding element of the inverse covariance matrix is zero. In many applications, estimating the inverse covariance matrix is statistically challenging because the number of features measured may be much larger than the number of collected samples. To handle this challenge, the graphical lasso or glasso [7, 24, 2] is rapidly becoming a popular method for estimating sparse undirected graphs. To use this method, however, the user must specify a regularization parameter λ that controls the sparsity of the graph. The choice of λ is critical since different λ's may lead to different scientific conclusions of the statistical inference. Other methods for estimating high dimensional graphs include [11, 14, 10]. They also require the user to specify a regularization parameter. The standard methods for choosing the regularization parameter are AIC [1], BIC [19] and cross validation [6]. Though these methods have good theoretical properties in low dimensions, they are not suitable for high dimensional problems. In regression, cross-validation has been shown to overfit the data [22]. Likewise, AIC and BIC tend to perform poorly when the dimension is large relative to the sample size.
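As a toy illustration of the correspondence between graph edges and the sparsity pattern of the inverse covariance matrix described above, the following minimal sketch (NumPy assumed available; all variable names are hypothetical) builds a small sparse precision matrix, samples Gaussian data from the implied covariance, and reads the true edge set off the nonzero off-diagonal entries:

    import numpy as np

    def edges_from_precision(omega, tol=1e-8):
        # Edges correspond to nonzero off-diagonal entries of the precision matrix.
        p = omega.shape[0]
        return {(i, j) for i in range(p) for j in range(i + 1, p)
                if abs(omega[i, j]) > tol}

    rng = np.random.default_rng(0)
    p = 5
    omega = np.eye(p)
    omega[0, 1] = omega[1, 0] = 0.4   # edge between variables 0 and 1
    omega[2, 3] = omega[3, 2] = 0.4   # edge between variables 2 and 3

    sigma = np.linalg.inv(omega)      # covariance = inverse precision
    x = rng.multivariate_normal(np.zeros(p), sigma, size=400)

    print(edges_from_precision(omega))  # {(0, 1), (2, 3)}

In this construction, variables 0 and 1 are conditionally dependent given the rest; the graph estimation problem is to recover exactly this edge set from the samples x.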
Our simulations confirm that these methods perform poorly when used with glasso.

A new approach to model selection, based on model stability, has recently generated some interest in the literature [8]. The idea, as we develop it, is based on subsampling [15] and builds on the approach of Meinshausen and Bühlmann [12]. We draw many random subsamples and construct a graph from each subsample (unlike K-fold cross-validation, these subsamples are overlapping). We choose the regularization parameter so that the obtained graph is sparse and there is not too much variability across subsamples. More precisely, we start with a large regularization which corresponds to an empty, and hence highly stable, graph. We gradually reduce the amount of regularization until there is a small but acceptable amount of variability of the graph across subsamples. In other words, we regularize to the point that we control the dissonance between graphs. The procedure is named StARS: Stability Approach to Regularization Selection. We study the performance of StARS by simulations and theoretical analysis in Sections 4 and 5. Although we focus here on graphical models, StARS is quite general and can be adapted to other settings including regression, classification, clustering, and dimensionality reduction. In the context of clustering, results of stability methods have been mixed. Weaknesses of stability have been shown in [3]. However, the approach was successful for density-based clustering [17]. For graph selection, Meinshausen and Bühlmann [12] also used a stability criterion; however, their approach differs from StARS in its fundamental conception. They use subsampling to produce a new and more stable regularization path and then select a regularization parameter from this newly created path, whereas we propose to use subsampling to directly select one regularization parameter from the original path. Our aim is to ensure that the selected graph is sparse, but inclusive, while they aim to control the familywise type I errors. As a consequence, their goal is contrary to ours: instead of selecting a larger graph that contains the true graph, they try to select a smaller graph that is contained in the true graph. As we will discuss in Section 3, in specific application domains like gene regulatory network analysis, our goal for graph selection is more natural.

2 Estimating a High-dimensional Undirected Graph

Let X = (X(1), . . . , X(p))^T be a random vector with distribution P. The undirected graph G = (V, E) associated with P has vertices V = {X(1), . . . , X(p)} and a set of edges E corresponding to pairs of vertices. In this paper, we also interchangeably use E to denote the adjacency matrix of the graph G. The edge corresponding to X(j) and X(k) is absent if X(j) and X(k) are conditionally independent given the other coordinates of X. The graph estimation problem is to infer E from i.i.d. observed data X_1, . . . , X_n where X_i = (X_i(1), . . . , X_i(p))^T. Suppose now that P is Gaussian with mean vector μ and covariance matrix Σ. Then the edge corresponding to X(j) and X(k) is absent if and only if Ω_jk = 0, where Ω = Σ^{-1}. Hence, to estimate the graph we only need to estimate the sparsity pattern of Ω. When p could diverge with n, estimating Ω is difficult. A popular approach is the graphical lasso or glasso [7, 24, 2]. Using glasso, we estimate Ω as follows. Ignoring constants, the log-likelihood (after maximizing over μ) can be written as ℓ(Ω) = log |Ω| − tr(ŜΩ), where Ŝ is the sample covariance matrix.
With a positive regularization parameter λ, the glasso estimator Ω̂(λ) is obtained by minimizing the regularized negative log-likelihood

    Ω̂(λ) = argmin_{Ω ≻ 0} { −ℓ(Ω) + λ ‖Ω‖_1 },   (1)

where ‖Ω‖_1 = Σ_{j,k} |Ω_jk| is the elementwise ℓ_1-norm of Ω. The estimated graph Ĝ(λ) = (V, Ê(λ)) is then easily obtained from Ω̂(λ): for i ≠ j, an edge (i, j) ∈ Ê(λ) if and only if the corresponding entry in Ω̂(λ) is nonzero. Friedman et al. [7] give a fast algorithm for calculating Ω̂(λ) over a grid of λ's ranging from small to large. By taking advantage of the fact that the objective function in (1) is convex, their algorithm iteratively estimates a single row (and column) of Ω in each iteration by solving a lasso regression [21]. The resulting regularization path Ω̂(λ) for all λ's has been shown to have excellent theoretical properties [18, 16]. For example, Ravikumar et al. [16] show that, if the regularization parameter λ satisfies a certain rate, the corresponding estimator Ω̂(λ) can recover the true graph with high probability. However, these types of results are either asymptotic or non-asymptotic but with very large constants. They are not practical enough to guide the choice of the regularization parameter λ in finite-sample settings.

3 Regularization Selection

In Equation (1), the choice of λ is critical because λ controls the sparsity level of Ĝ(λ). Larger values of λ tend to yield sparser graphs and smaller values of λ yield denser graphs. It is convenient to define Λ = 1/λ so that small Λ corresponds to a more sparse graph. In particular, Λ = 0 corresponds to the empty graph with no edges. Given a grid of regularization parameters G_n = {Λ_1, . . . , Λ_K}, our goal of graph regularization parameter selection is to choose one Λ̂ ∈ G_n such that the true graph E is contained in Ê(Λ̂) with high probability. In other words, we want to "overselect" instead of "underselect". Such a choice is motivated by application problems like gene regulatory network reconstruction, in which we aim to study the interactions of many genes. For these types of studies, we tolerate some false positives but not false negatives. Specifically, it is acceptable that an edge is present although the two genes corresponding to this edge do not really interact with each other. Such false positives can generally be screened out by more fine-tuned downstream biological experiments. However, if one important interaction edge is omitted at the beginning, it is very difficult for us to re-discover it by follow-up analysis. There is also a tradeoff: we want to select a denser graph which contains the true graph with high probability. At the same time, we want the graph to be as sparse as possible so that important information will not be buried by massive false positives. Based on this rationale, an "underselect" method, like the approach of Meinshausen and Bühlmann [12], does not really fit our goal. In the following, we start with an overview of several state-of-the-art regularization parameter selection methods for graphs. We then introduce our new StARS approach.

3.1 Existing Methods

The regularization parameter is often chosen using AIC or BIC. Let Ω̂(λ) denote the estimator corresponding to λ. Let d(λ) denote the degree of freedom (or the effective number of free parameters) of the corresponding Gaussian model. AIC chooses λ to minimize −2ℓ(Ω̂(λ)) + 2d(λ) and BIC chooses λ to minimize −2ℓ(Ω̂(λ)) + d(λ) · log n.
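As a rough sketch of how the glasso path and these information criteria could be computed in practice, the following assumes scikit-learn's graphical_lasso routine is available; the degrees-of-freedom line follows the common practice d(λ) = m(λ)(m(λ) − 1)/2 + p discussed below, so it is an assumption of this sketch rather than part of the original method:

    import numpy as np
    from sklearn.covariance import empirical_covariance, graphical_lasso

    def glasso_path_with_ic(x, lambdas):
        # Fit glasso over a grid of penalties and score each fit by AIC/BIC.
        n, p = x.shape
        s = empirical_covariance(x)
        scores = []
        for lam in lambdas:
            _, omega = graphical_lasso(s, alpha=lam)
            # log-likelihood l(Omega) = log|Omega| - tr(S Omega), up to constants
            loglik = n / 2.0 * (np.linalg.slogdet(omega)[1] - np.trace(s @ omega))
            m = np.count_nonzero(np.abs(omega) > 1e-8)  # nonzero entries of Omega
            d = m * (m - 1) / 2.0 + p                   # d(lambda), Section 3.1
            scores.append({"lambda": lam,
                           "AIC": -2 * loglik + 2 * d,
                           "BIC": -2 * loglik + d * np.log(n)})
        return scores

The λ minimizing the AIC or BIC column would then be the parameter selected by the respective criterion.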
The usual theoretical justification for these methods assumes that the dimension p is fixed as n increases; however, in the case where p > n this justification is not applicable. In fact, it is not even straightforward how to estimate the degree of freedom d(λ) when p is larger than n. A common practice is to calculate d(λ) as d(λ) = m(λ)(m(λ) − 1)/2 + p, where m(λ) denotes the number of nonzero elements of Ω̂(λ). As we will see in our experiments, AIC and BIC tend to select overly dense graphs in high dimensions. Another popular method is K-fold cross-validation (K-CV). For this procedure the data is partitioned into K subsets. Of the K subsets one is retained as the validation data, and the remaining K − 1 ones are used as training data. For each λ ∈ G_n, we estimate a graph on the K − 1 training sets and evaluate the negative log-likelihood on the retained validation set. The results are averaged over all K folds to obtain a single CV score. We then choose λ to minimize the CV score over the whole grid G_n. In regression, cross-validation has been shown to overfit [22]. Our experiments will confirm this is true for graph estimation as well.

3.2 StARS: Stability Approach to Regularization Selection

The StARS approach is to choose Λ based on stability. When Λ is 0, the graph is empty and two datasets from P would both yield the same graph. As we increase Λ, the variability of the graph increases and hence the stability decreases. We increase Λ just until the point where the graph becomes variable as measured by the stability. StARS leads to a concrete rule for choosing Λ. Let b = b(n) be such that 1 < b(n) < n. We draw N random subsamples S_1, . . . , S_N from X_1, . . . , X_n, each of size b. There are (n choose b) such subsamples. Theoretically one uses all (n choose b) subsamples; however, Politis et al. [15] show that it suffices in practice to choose a large number N of subsamples at random. Note that, unlike bootstrapping [6], each subsample is drawn without replacement. For each Λ ∈ G_n, we construct a graph using the glasso for each subsample. This results in N estimated edge matrices Ê^b_1(Λ), . . . , Ê^b_N(Λ). Focus for now on one edge (s, t) and one value of Λ. Let ψ^Λ denote the glasso algorithm with the regularization parameter Λ. For any subsample S_j, let ψ^Λ_st(S_j) = 1 if the algorithm puts an edge between (s, t) and ψ^Λ_st(S_j) = 0 if it does not. Define θ^b_st(Λ) = P(ψ^Λ_st(X_1, . . . , X_b) = 1). To estimate θ^b_st(Λ), we use a U-statistic of order b, namely θ̂^b_st(Λ) = (1/N) Σ_{j=1}^{N} ψ^Λ_st(S_j). Now define the parameter ξ^b_st(Λ) = 2θ^b_st(Λ)(1 − θ^b_st(Λ)) and let ξ̂^b_st(Λ) = 2θ̂^b_st(Λ)(1 − θ̂^b_st(Λ)) be its estimate. Then ξ^b_st(Λ), in addition to being twice the variance of the Bernoulli indicator of the edge (s, t), has the following nice interpretation: for each pair of graphs, we can ask how often they disagree on the presence of the edge; ξ^b_st(Λ) is the fraction of times they disagree. For Λ ∈ G_n, we regard ξ̂^b_st(Λ) as a measure of instability of the edge across subsamples, with 0 ≤ ξ̂^b_st(Λ) ≤ 1/2. Define the total instability by averaging over all edges: D̂_b(Λ) = Σ_{s<t} ξ̂^b_st(Λ) / (p choose 2). Clearly D̂_b(0) = 0 on the boundary, and D̂_b(Λ) generally will increase as Λ increases. However, when Λ gets very large, all the graphs will become dense and D̂_b(Λ) will begin to decrease. Subsample stability for large Λ is essentially an artifact; we are interested in stability for sparse graphs, not dense graphs (the full procedure, including the monotonization this motivates, is sketched below).
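The whole StARS procedure, including the monotonized instability and the selection rule introduced in the next paragraph, fits in a short sketch. Everything here is a hypothetical implementation under the definitions above: graph_estimator(x_sub, lam) stands for any algorithm ψ (e.g. glasso) that returns a boolean (p, p) adjacency matrix.

    import numpy as np

    def stars_select(x, lambdas, graph_estimator, num_subsamples=20, beta=0.05,
                     seed=None):
        # lambdas must be sorted from small (sparse graphs) to large (dense).
        rng = np.random.default_rng(seed)
        n, p = x.shape
        b = int(10 * np.sqrt(n))            # block size suggested in Section 4
        theta_hat = np.zeros((len(lambdas), p, p))
        for _ in range(num_subsamples):     # subsamples drawn without replacement
            sub = x[rng.choice(n, size=b, replace=False)]
            for k, lam in enumerate(lambdas):
                theta_hat[k] += graph_estimator(sub, lam)
        theta_hat /= num_subsamples                   # edge selection frequencies
        xi_hat = 2 * theta_hat * (1 - theta_hat)      # per-edge instability
        iu = np.triu_indices(p, k=1)
        d_hat = xi_hat[:, iu[0], iu[1]].mean(axis=1)  # total instability D_b
        d_bar = np.maximum.accumulate(d_hat)          # monotonize over the grid
        admissible = np.flatnonzero(d_bar <= beta)
        return lambdas[admissible[-1]] if admissible.size else lambdas[0]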
For this reason we monotonize D̂_b(Λ) by defining D̄_b(Λ) = sup_{0 ≤ t ≤ Λ} D̂_b(t). Finally, our StARS approach chooses Λ by defining Λ̂_s = sup{Λ : D̄_b(Λ) ≤ β} for a specified cut point value β. It may seem that we have merely replaced the problem of choosing Λ with the problem of choosing β, but β is an interpretable quantity and we always set a default value β = 0.05. One thing to note is that all quantities Ê, θ̂, ξ̂, D̂ depend on the subsampling block size b. Since StARS is based on subsampling, the effective sample size for estimating the selected graph is b instead of n. Compared with methods like BIC and AIC, which fully utilize all n data points, StARS has some efficiency loss in low dimensions. However, in high dimensional settings, the gain of StARS from better graph selection significantly dominates this efficiency loss. This fact is confirmed by our experiments.

4 Theoretical Properties

The StARS procedure is quite general and can be applied with any graph estimation algorithm. Here, we provide its theoretical properties. We start with a key theorem which establishes the rates of convergence of the estimated stability quantities to their population means. We then discuss the implication of this theorem for general graph regularization selection problems. Let Λ be an element in the grid G_n = {Λ_1, . . . , Λ_K}, where K is a polynomial of n. The quantity ξ̂^b_st(Λ) is an estimate of ξ^b_st(Λ), and D̂_b(Λ) is an estimate of D_b(Λ) = E(D̂_b(Λ)). Standard U-statistic theory guarantees that these estimates have good uniform convergence properties to their population quantities:

Theorem 1 (Uniform Concentration). The following statements hold with no assumptions on P. For any δ ∈ (0, 1), with probability at least 1 − δ, we have

    ∀Λ ∈ G_n:  max_{s<t} |ξ̂^b_st(Λ) − ξ^b_st(Λ)| ≤ √( 18b (2 log p + log(2/δ)) / n ),   (2)

    max_{Λ∈G_n} |D̂_b(Λ) − D_b(Λ)| ≤ √( 18b (log K + 4 log p + log(1/δ)) / n ).   (3)

Proof. Note that θ̂^b_st(Λ) is a U-statistic of order b. Hence, by Hoeffding's inequality for U-statistics [20], we have, for any ε > 0,

    P( |θ̂^b_st(Λ) − θ^b_st(Λ)| > ε ) ≤ 2 exp(−2nε²/b).   (4)

Now ξ̂^b_st(Λ) is just a function of the U-statistic θ̂^b_st(Λ). Note that

    |ξ̂^b_st(Λ) − ξ^b_st(Λ)|
      = 2 |θ̂^b_st(Λ)(1 − θ̂^b_st(Λ)) − θ^b_st(Λ)(1 − θ^b_st(Λ))|   (5)
      = 2 |θ̂^b_st(Λ) − θ^b_st(Λ) − (θ̂^b_st(Λ))² + (θ^b_st(Λ))²|   (6)
      ≤ 2 |θ̂^b_st(Λ) − θ^b_st(Λ)| + 2 |(θ̂^b_st(Λ))² − (θ^b_st(Λ))²|   (7)
      = 2 |θ̂^b_st(Λ) − θ^b_st(Λ)| + 2 |(θ̂^b_st(Λ) − θ^b_st(Λ))(θ̂^b_st(Λ) + θ^b_st(Λ))|   (8)
      ≤ 2 |θ̂^b_st(Λ) − θ^b_st(Λ)| + 4 |θ̂^b_st(Λ) − θ^b_st(Λ)|   (9)
      = 6 |θ̂^b_st(Λ) − θ^b_st(Λ)|,   (10)

so that |ξ̂^b_st(Λ) − ξ^b_st(Λ)| ≤ 6 |θ̂^b_st(Λ) − θ^b_st(Λ)|. Using (4) and a union bound over all the edges, we obtain, for each Λ ∈ G_n,

    P( max_{s<t} |ξ̂^b_st(Λ) − ξ^b_st(Λ)| > 6ε ) ≤ 2p² exp(−2nε²/b).   (11)

Using two union bound arguments over the K values of Λ and all the p(p − 1)/2 edges, we have

    P( max_{Λ∈G_n} |D̂_b(Λ) − D_b(Λ)| ≥ ε ) ≤ |G_n| · (p(p − 1)/2) · P( max_{s<t} |ξ̂^b_st(Λ) − ξ^b_st(Λ)| > ε )   (12)
      ≤ K · p⁴ · exp(−nε²/(18b)).   (13)

Equations (2) and (3) follow directly from (11) and the above exponential probability inequality.

Theorem 1 allows us to explicitly characterize the high-dimensional scaling of the sample size n, dimensionality p, subsampling block size b, and the grid size K. More specifically, by setting δ = 1/n in Equation (3), we get

    n / ( b log(n p⁴ K) ) → ∞   ⟹   max_{Λ∈G_n} |D̂_b(Λ) − D_b(Λ)| → 0 in probability.   (14)

From (14), let c₁, c₂ be arbitrary positive constants: if b = c₁√n,
K = n^{c₂}, and p ≤ exp(n^γ) for some γ < 1/2, the estimated total stability D̂_b(Λ) still converges to its mean D_b(Λ) uniformly over the whole grid G_n.

We now discuss the implication of Theorem 1 for graph regularization selection problems. Due to the generality of StARS, we provide theoretical justifications for a whole family of graph estimation procedures satisfying certain conditions. Let Ψ be a graph estimation procedure. We denote Ê^Λ_b as the estimated edge set using the regularization parameter Λ by applying Ψ on a subsampled dataset with block size b. To establish the graph selection result, we start with two technical assumptions:

(A1) There exists Λ_o ∈ G_n such that max_{Λ≤Λ_o, Λ∈G_n} D_b(Λ) ≤ β/2 for large enough n.
(A2) For any Λ ∈ G_n with Λ ≥ Λ_o, P(E ⊂ Ê^Λ_b) → 1 as n → ∞.

Note that Λ_o here depends on the sample size n and does not have to be unique. To understand the above conditions, (A1) assumes that there exists a threshold Λ_o ∈ G_n such that the population quantity D_b(Λ) is small for all Λ ≤ Λ_o. (A2) requires that all estimated graphs using regularization parameters Λ ≥ Λ_o contain the true graph with high probability. Both assumptions are mild and should be satisfied by most graph estimation algorithms with reasonable behaviors. More detailed analysis on how glasso satisfies (A1) and (A2) will be provided in the full version of this paper. There is a tradeoff on the design of the subsampling block size b. To make (A2) hold, we require b to be large. However, to make D̂_b(Λ) concentrate to D_b(Λ) fast, we require b to be small. Our suggested value is b = ⌊10√n⌋, which balances both the theoretical and empirical performance well. The next theorem provides the graph selection performance of StARS:

Theorem 2 (Partial Sparsistency). Let Ψ be a graph estimation algorithm. We assume (A1) and (A2) hold for Ψ using b = ⌊10√n⌋ and |G_n| = K = n^{c₁} for some constant c₁ > 0. Let Λ̂_s ∈ G_n be the selected regularization parameter using the StARS procedure with a constant cutting point β. Then, if p ≤ exp(n^γ) for some γ < 1/2, we have

    P( E ⊂ Ê_b(Λ̂_s) ) → 1 as n → ∞.   (15)

Proof. We define A_n to be the event that max_{Λ∈G_n} |D̂_b(Λ) − D_b(Λ)| ≤ β/2. The scaling of n, K, b, p in the theorem satisfies the L.H.S. of (14), which implies that P(A_n) → 1 as n → ∞. Using (A1), we know that, on A_n,

    max_{Λ≤Λ_o, Λ∈G_n} D̂_b(Λ) ≤ max_{Λ∈G_n} |D̂_b(Λ) − D_b(Λ)| + max_{Λ≤Λ_o, Λ∈G_n} D_b(Λ) ≤ β.   (16)

This implies that, on A_n, Λ̂_s ≥ Λ_o. The result follows by applying (A2) and a union bound.

5 Experimental Results

We now provide empirical evidence to illustrate the usefulness of StARS and compare it with several state-of-the-art competitors, including 10-fold cross-validation (K-CV), BIC, and AIC. For StARS we always use subsampling block size b(n) = ⌊10√n⌋ and set the cut point β = 0.05. We first
Let G = (V, E) be a pb = (V, E) b be an estimated graph. We define precision = |E b ? E|/|E|, b dimensional graph and let G b recall = |E ? E|/|E|, and F1 -score = 2 ? precision ? recall/(precision + recall). In other words, Precision is the number of correctly estimated edges divided by the total number of edges in the estimated graph; recall is the number of correctly estimated edges divided by the total number of edges in the true graph; the F1 -score can be viewed as a weighted average of the precision and recall, where an F1 -score reaches its best value at 1 and worst score at 0. On the synthetic data where we know the true graphs, we also compare the previous methods with an oracle procedure which selects the optimal regularization parameter by minimizing the total number of different edges between the estimated and true graphs along the full regularization path. Since this oracle procedure requires the knowledge of the truth graph, it is not a practical method. We only present it here to calibrate the inherent challenge of each simulated scenario. To make the comparison fair, once the regularization parameters are selected, ? we estimate the oracle and StARS graphs only based on a subsampled dataset with size b(n) = ?10 n?. In contrast, the K-CV, BIC, and AIC graphs are estimated using the full dataset. More details about this issue were discussed in Section 3. We generate data from sparse Gaussian graphs, neighborhood graphs and hub graphs, which mimic characteristics of real-wolrd biological networks. The mean is set to be zero and the covariance matrix ? = ??1 . For both graphs, the diagonal elements of ? are set to be one. More specifically: 1. Neighborhood graph: We first uniformly sample y1 , . . . , yn from a unit square. We then set (? )?1 ( ) ?ij = ?ji = ? with probability 2? exp ?4?yi ? yj ?2 . All the rest ?ij are set to be zero. The number of nonzero off-diagonal elements of each row or column is restricted to be smaller than ?1/??. In this paper, ? is set to be 0.245. 2. Hub graph: The rows/columns are partitioned into J equally-sized disjoint groups: V1 ? V2 . . . ? VJ = {1, . . . , p}, each group is associated with a ?pivotal? row k. Let |V1 | = s. We set ?ik = ?ki = ? for i ? Vk and ?ik = ?ki = 0 otherwise. In our experiment, J = ?p/s?, k = 1, s + 1, 2s + 1, . . ., and we always set ? = 1/(s + 1) with s = 20. We generate synthetic datasets in both low-dimensional (n = 800, p = 40) and high-dimensional (n = 400, p = 100) settings. Table 1 provides comparisons of all methods, where we repeat the experiments 100 times and report the averaged precision, recall, F1 -score with their standard errors. Table 1: Quantitative comparison of different methods on the datasets from the neighborhood and hub graphs. 
Neighborhood graph: n = 800, p = 40
Methods   Precision        Recall           F1-score
Oracle    0.9222 (0.05)    0.9070 (0.07)    0.9119 (0.04)
StARS     0.7204 (0.08)    0.9530 (0.05)    0.8171 (0.05)
K-CV      0.1394 (0.02)    1.0000 (0.00)    0.2440 (0.04)
BIC       0.9738 (0.03)    0.9948 (0.02)    0.9839 (0.01)
AIC       0.8696 (0.11)    0.9996 (0.01)    0.9236 (0.07)

Neighborhood graph: n = 400, p = 100
Methods   Precision        Recall           F1-score
Oracle    0.7473 (0.09)    0.8001 (0.06)    0.7672 (0.07)
StARS     0.6366 (0.07)    0.8718 (0.06)    0.7352 (0.07)
K-CV      0.1383 (0.01)    1.0000 (0.00)    0.2428 (0.01)
BIC       0.1796 (0.11)    1.0000 (0.00)    0.2933 (0.13)
AIC       0.1279 (0.00)    1.0000 (0.00)    0.2268 (0.01)

Hub graph: n = 800, p = 40
Methods   Precision        Recall           F1-score
Oracle    0.9793 (0.01)    1.0000 (0.00)    0.9895 (0.01)
StARS     0.4377 (0.02)    1.0000 (0.00)    0.6086 (0.02)
K-CV      0.2383 (0.09)    1.0000 (0.00)    0.3769 (0.01)
BIC       0.4879 (0.05)    1.0000 (0.00)    0.6542 (0.05)
AIC       0.2522 (0.09)    1.0000 (0.00)    0.3951 (0.00)

Hub graph: n = 400, p = 100
Methods   Precision        Recall           F1-score
Oracle    0.8976 (0.02)    1.0000 (0.00)    0.9459 (0.01)
StARS     0.4572 (0.01)    1.0000 (0.00)    0.6274 (0.01)
K-CV      0.1574 (0.01)    1.0000 (0.00)    0.2719 (0.00)
BIC       0.2155 (0.00)    1.0000 (0.00)    0.3545 (0.01)
AIC       0.1676 (0.00)    1.0000 (0.00)    0.2871 (0.00)

For low-dimensional settings where n ≫ p, the BIC criterion is very competitive and performs the best among all the methods. In high dimensional settings, however, StARS clearly outperforms all the competing methods for both neighborhood and hub graphs. This is consistent with our theory. At first sight, it might be surprising that for data from low-dimensional neighborhood graphs, BIC and AIC even outperform the oracle procedure! This is due to the fact that both BIC and AIC graphs are estimated using all the n = 800 data points, while the oracle graph is estimated using only a subsampled dataset with size b(n) = ⌊10√800⌋ = 282. Direct usage of the full sample is an advantage of model selection methods that take the general form of BIC and AIC. In high dimensions, however, we see that even with this advantage, StARS clearly outperforms BIC and AIC. The estimated graphs for different methods in the setting n = 400, p = 100 are provided in Figures 1 and 2, from which we see that the StARS graph is almost as good as the oracle, while the K-CV, BIC, and AIC graphs are overly dense.

Figure 1: Comparison of different methods on the data from the neighborhood graphs (n = 400, p = 100). Panels: (a) True graph, (b) Oracle graph, (c) StARS graph, (d) K-CV graph, (e) BIC graph, (f) AIC graph.

5.2 Microarray Data

We apply StARS to a dataset based on Affymetrix GeneChip microarrays for the gene expression levels from immortalized B cells of human subjects. The sample size is n = 294. The expression levels for each array are pre-processed by log-transformation and standardization as in [13]. Using a sub-pathway subset of 324 correlated genes, we study the estimated graphs obtained from each method under investigation. The StARS and BIC graphs are provided in Figure 3. We see that the StARS graph is remarkably simple and informative, exhibiting some cliques and hub genes. In contrast, the BIC graph is very dense and possible useful association information is buried in the large number of estimated edges. The selected graphs using AIC and K-CV are even more dense than the BIC graph and will be reported elsewhere. A full treatment of the biological implication of these two graphs validated by enrichment analysis will be provided in the full version of this paper.

6 Conclusions

The problem of estimating structure in high dimensions is very challenging.
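Returning briefly to the evaluation criteria of Section 5.1, the precision/recall/F1 computation reduces to a few lines once true and estimated graphs are represented as sets of edges (a minimal sketch; the helper name is hypothetical):

    def graph_metrics(true_edges, est_edges):
        # Precision, recall and F1-score between two edge sets (Section 5.1).
        tp = len(true_edges & est_edges)              # correctly estimated edges
        precision = tp / len(est_edges) if est_edges else 0.0
        recall = tp / len(true_edges) if true_edges else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall > 0 else 0.0)
        return precision, recall, f1

    # graph_metrics({(0, 1), (2, 3)}, {(0, 1), (1, 2)}) -> (0.5, 0.5, 0.5)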
Casting the problem in the context of a regularized optimization has led to some success, but the choice of the regularization parameter is critical. We present a new method, StARS, for choosing this parameter in high dimensional inference for undirected graphs. Like Meinshausen and Bühlmann's stability selection approach [12], our method makes use of subsampling, but it differs substantially from their approach in both implementation and goals. For graphical models, we choose the regularization parameter directly based on the edge stability. Under mild conditions, StARS is partially sparsistent. However, even without these conditions, StARS has a simple interpretation: we use the least amount of regularization that simultaneously makes a graph sparse and replicable under random sampling. Empirically, we show that StARS works significantly better than existing techniques on both synthetic and microarray datasets. Although we focus here on graphical models, our new method is generally applicable to many problems that involve estimating structure, including regression, classification, density estimation, clustering, and dimensionality reduction.

Figure 2: Comparison of different methods on the data from the hub graphs (n = 400, p = 100). Panels: (a) True graph, (b) Oracle graph, (c) StARS graph, (d) K-CV graph, (e) BIC graph, (f) AIC graph.

Figure 3: Microarray data example. The StARS graph is a more informative graph than the BIC graph. Panels: (a) StARS graph, (b) BIC graph.

References

[1] Hirotsugu Akaike. Information theory and an extension of the maximum likelihood principle. Second International Symposium on Information Theory, (2):267-281, 1973.
[2] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation. Journal of Machine Learning Research, 9:485-516, March 2008.
[3] Shai Ben-David, Ulrike von Luxburg, and David Pál. A sober look at clustering stability. In Proceedings of the Conference on Learning Theory, pages 5-19. Springer, 2006.
[4] Arthur P. Dempster. Covariance selection. Biometrics, 28:157-175, 1972.
[5] David Edwards. Introduction to Graphical Modelling. Springer-Verlag Inc, 1995.
[6] Bradley Efron. The Jackknife, the Bootstrap and Other Resampling Plans. SIAM [Society for Industrial and Applied Mathematics], 1982.
[7] Jerome H. Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2007.
[8] Tilman Lange, Volker Roth, Mikio L. Braun, and Joachim M. Buhmann. Stability-based validation of clustering solutions. Neural Computation, 16(6):1299-1323, 2004.
[9] Steffen L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[10] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295-2328, 2009.
[11] Nicolai Meinshausen and Peter Bühlmann. High dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 34:1436-1462, 2006.
[12] Nicolai Meinshausen and Peter Bühlmann. Stability selection. To appear in Journal of the Royal Statistical Society, Series B, Methodological, 2010.
[13] Renuka R. Nayak, Michael Kearns, Richard S. Spielman, and Vivian G. Cheung. Coexpression network based on natural variation in human gene expression reveals gene interactions and functions. Genome Research, 19(11):1953-1962, November 2009.
[14] Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu.
Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735-746, 2009.
[15] Dimitris N. Politis, Joseph P. Romano, and Michael Wolf. Subsampling (Springer Series in Statistics). Springer, 1st edition, August 1999.
[16] Pradeep Ravikumar, Martin Wainwright, Garvesh Raskutti, and Bin Yu. Model selection in Gaussian graphical models: High-dimensional consistency of ℓ1-regularized MLE. In Advances in Neural Information Processing Systems 22, Cambridge, MA, 2009. MIT Press.
[17] Alessandro Rinaldo and Larry Wasserman. Generalized density clustering. arXiv:0907.3454, 2009.
[18] Adam J. Rothman, Peter J. Bickel, Elizaveta Levina, and Ji Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.
[19] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6:461-464, 1978.
[20] Robert J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley and Sons, 1980.
[21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, Methodological, 58:267-288, 1996.
[22] Larry Wasserman and Kathryn Roeder. High dimensional variable selection. Annals of Statistics, 37(5A):2178-2201, January 2009.
[23] Joe Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley, 1990.
[24] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

Felipe Gerhard
Brain Mind Institute
Ecole Polytechnique Fédérale de Lausanne
1015 Lausanne EPFL, Switzerland
[email protected]

Wulfram Gerstner
Brain Mind Institute
Ecole Polytechnique Fédérale de Lausanne
1015 Lausanne EPFL, Switzerland
[email protected]

Abstract

Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains. They have been linked to the theory of stochastic point processes and researchers have used this relation to assess goodness-of-fit using methods from point-process theory, e.g. the time-rescaling theorem. However, high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection. Here, we show how goodness-of-fit tests from point-process theory can still be applied to GLMs by constructing equivalent surrogate point processes out of time-series observations. Furthermore, two additional tests based on thinning and complementing point processes are introduced. They augment the instruments available for checking model adequacy of point processes as well as discretized models.

1 Introduction

Action potentials are stereotyped all-or-nothing events, meaning that their amplitude is not considered to transmit any information and only the exact time of occurrence matters. This view suggests modeling neurons' responses in the mathematical framework of point processes. An observation is a sequence of spike times and their stochastic properties are captured by a single function, the conditional intensity [1]. For point processes on the time line, several approaches for evaluating goodness-of-fit have been proposed [2]. The most popular in the neuroscientific community has been a test based on the time-rescaling theorem [3]. In practice, neural data is binned such that a spike train is represented as a sequence of spike counts per time bin. Specifically, Generalized Linear Models (GLMs) are built on this representation. Such discretized models of time series have mostly been seen as an approximation to continuous point processes and hence, the time-rescaling theorem was also applied to such models [4, 5, 6, 7, 8]. Here we ask the question whether the time-rescaling theorem can be translated to discrete time. We review the approximations necessary for the transition to discrete time and point out a procedure to create surrogate point processes even when these approximations do not hold (section 2). Two novel tests based on two different operations on point processes are introduced: random thinning and random complementing. These ideas are applied to a series of examples (section 3), followed by a discussion (section 4).

Figure 1: Spike train representations. (A) A trace of the membrane potential of a spiking neuron. (B) Information is conveyed in the timings and number of action potentials. This supports the representation of neural activity as a point process in which each spike is assumed to be a singular event in time. (C) When time is divided into large bins, the spike train is represented as a time series of discrete counts. (D) If the bin width is chosen small enough, the spike train corresponds to a binary time series, indicating the presence of a single spike inside a given time bin.
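To make the representations of Figure 1 concrete, a small sketch (NumPy assumed; the function name is hypothetical) that converts a list of spike times into the binned count and binary sequences:

    import numpy as np

    def bin_spike_train(spike_times, t_end, delta):
        # Convert spike times (in seconds) into counts per bin and a binary sequence.
        n_bins = int(round(t_end / delta))
        edges = np.arange(n_bins + 1) * delta
        counts, _ = np.histogram(spike_times, bins=edges)  # c_i (Figure 1C)
        binary = (counts > 0).astype(int)                  # b_i (Figure 1D)
        return counts, binary

    counts, binary = bin_spike_train([0.012, 0.013, 0.250], t_end=0.3, delta=0.01)
    # counts contains a 2 in the second bin; binary clips it to 1.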
2 Methods

2.1 Representations of neural activity

We characterize a neuron by its response in terms of trains of action potentials using the theory of point processes (Figures 1A and 1B). An observation consists of a list of times, each denoting the time point of one action potential. Following a common notation [3, 9], let (0, T] be the time interval of the measurement and {u_i} be the set of n event times. The stochastic properties of a point process are characterized by its conditional intensity function λ(t|H(t)), defined as [1]:

    λ(t|H_t) = lim_{Δ→0} P[spike in (t, t + Δ) | H_t] / Δ,   (1)

where H_t is the history of the stochastic process up to time t and possibly includes other covariates of interest. For fitting and evaluating different parameter sets of the conditional intensity function, a maximum-likelihood approach is followed [10, 11]. The log-likelihood of a point process model is given by [1]:

    log L(point process) = Σ_{i=1}^{n} log λ(u_i|H_{u_i}) − ∫_0^T λ(t|H_t) dt.   (2)

One possibility are binning-free models (like renewal processes or other parametric models). Alternatively, λ(t|H_t) can be modeled as a piece-wise constant function with each piece having length Δ. In this case, the history term H_t covers the history up to the time of the left edge of the current bin. Inside the bin, the process locally behaves like a Poisson process with constant rate λ_k = λ(t_k|H_k), with t_k = kΔ and H_k = H_{t_k}. Using the number of spikes c_k per bin as a representation of the observation, the discretized version of Equation 2 is equivalent to the log-likelihood of a series of Poisson samples (apart from terms that are not dependent on λ(t|H_t)). Hence, for finding the maximum-likelihood solution for the point process, it is equivalently sufficient to maximize the likelihood of such a Poisson regression model. The result of fitting will be a sequence of μ_i for each bin, where μ_i is the expected number of counts. Since a local Poisson process is assumed within the bins, μ_i is related to λ_i via λ_i = μ_i / Δ.

A complementary approach to the point process framework is to see spike trains as time series, e.g. as a sequence of counts {c_i} or binary events {b_i} (Figures 1C and 1D). For Poisson-GLMs, a sequence of Poisson-distributed count variables c_i is modeled and the linear sum of covariates is linked to the expected mean of the Poisson distribution μ_i. Binary time series can be modeled as a sequence of conditionally independent Bernoulli trials with outcomes 0 and 1 and success probabilities {p_i}. For Bernoulli-GLMs, the p_i's are linked via a non-linear transfer function to a linear sum of covariates. Defined this way, the likelihood for an observed sequence b_i given a particular model of p_i is given by log L(Bernoulli) = Σ_k b_k log( p_k / (1 − p_k) ) + Σ_k log(1 − p_k). In the approximation μ_i ≪ 1, μ_i becomes approximately p_i and the likelihoods of the Bernoulli and Poisson series become equivalent. Moreover, using the same approximation, it is possible to link the Bernoulli series to the conditional intensity function λ(t|H_t) via λ_i ≈ p_i / Δ. Traditionally, this path was chosen to relate the time series to the theory of point processes and to be able to use goodness-of-fit analyses available for such point processes [9].
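The two discretized likelihoods above lend themselves to a direct sketch (mu and p stand for the per-bin expected counts and spiking probabilities of some fitted model; the names are hypothetical):

    import numpy as np

    def poisson_loglik(counts, mu):
        # log L for a Poisson series, dropping the model-independent log(c_i!) term.
        return np.sum(counts * np.log(mu) - mu)

    def bernoulli_loglik(binary, p):
        # log L = sum_k b_k log(p_k / (1 - p_k)) + sum_k log(1 - p_k)
        return np.sum(binary * np.log(p / (1 - p)) + np.log(1 - p))

For small expected counts the two values nearly coincide (mu_i is approximately p_i), and the conditional intensity per bin is recovered as lambda_i = mu_i / delta.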
Figure 2: Overview of goodness-of-fit tests for point-process models. (A) Using the time-rescaling theorem, the time of each spike is rescaled according to the integral of the conditional intensity function. (B) Assuming that the conditional intensity function has a lower limit B, spikes of the original spike train are thinned by keeping a spike only with probability B/λ_i. (C) Assuming that the conditional intensity function has an upper limit C, a complementary process λ_C = C − λ can be constructed. Adding samples from this inhomogeneous Poisson process to the observed spikes results in a homogeneous Poisson process with rate C.

2.2 Goodness-of-fit tests for point processes

Statistical tests are usually evaluated using two measures: the specificity (fraction of correct models that pass the test) and the sensitivity or test power (fraction of wrong models that are properly rejected by the test). The specificity is set by the significance level: with significance level α, the specificity is 1 − α. The sensitivity of a given test depends on the strength of the departure from the modeled intensity function to the true intensity.

2.2.1 The time-rescaling theorem

A popular way for verifying point-process-based models has been the time-rescaling theorem [3, 12]. It states that if {u_i} is a realization of events from a point process with conditional intensity λ(t|H_t), then rescaling via the transformation u_i' = ∫_0^{u_i} λ(t|H_t) dt will yield a unit-rate Poisson process. We call the following transformation the naïve time-rescaling when it is applied to binary sequences: the spike time u_i, falling into bin j, is transformed into u_i' = Σ_{k=1}^{j} p_k.

2.2.2 Thinning point processes

It is well known that an inhomogeneous point process can be simulated by generating a homogeneous Poisson process with constant intensity C, with C ≥ max λ(t) (the so-called dominant process), and keeping every spike at time t_i with probability p = λ(t_i)/C [13, 2]. In reverse, this can be used to do model-checking [14]: Let B be a lower bound of the fitted conditional intensity λ(t|H(t)). Now take λ(t|H(t)) as the dominant process with samples u_i. Thin the process by keeping a spike with probability B/λ(t_i|H_t). For a correctly specified model λ(t|H_t), the thinned process will be a homogeneous Poisson process with rate B (Figure 2B).

Typically, B = min λ(t) is much smaller than typical values of λ(t) (due to absolute refractoriness in most renewal process models and GLMs), such that the thinned process will have a prohibitively low rate and only very few spikes will be selected. Testing the Poisson hypothesis on a handful of spikes will result in a vanishingly low power. To circumvent this problem, we propose the following remedy: Let B* be a threshold which may be higher than the lower bound B. Then consider only the intervals of λ for which λ > B* and concatenate those into a new point process. After applying the thinning procedure on all spikes of the stitched process, the thinned process should be a Poisson process with rate B*. This procedure can be repeated K times for a range of uniformly spaced B*s ranging from B to C (the upper bound). Stretching each thinned process by a factor of B* creates a set of K unit-rate processes. Each of them is tested for the Poisson hypothesis by a Kolmogorov-Smirnov test on the inter-spike intervals. The model is rejected when at least one null hypothesis is significantly rejected.
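Both operations of Sections 2.2.1 and 2.2.2 are compact enough to sketch directly. The sketch below assumes a piece-wise constant conditional intensity lam (one value per bin of width delta) and SciPy's Kolmogorov-Smirnov test; for brevity it omits the concatenation of the super-threshold regions described above, so it illustrates the core operations rather than the full procedure:

    import numpy as np
    from scipy.stats import kstest

    def rescale_spike_times(spike_times, lam, delta):
        # u_i' = integral from 0 to u_i of lambda(t|H_t) dt, piece-wise constant case.
        cum = np.concatenate(([0.0], np.cumsum(lam) * delta))  # integral at bin edges
        t = np.asarray(spike_times)
        idx = (t / delta).astype(int)
        frac = t / delta - idx                                 # position inside bin
        return cum[idx] + frac * lam[idx] * delta

    def thinning_test(spike_times, lam_at_spikes, b_star, seed=None):
        # Keep each spike with probability B*/lambda(t_i); assumes lambda >= B*
        # at the spike times considered. KS test on the stretched thinned ISIs.
        rng = np.random.default_rng(seed)
        keep = rng.random(len(spike_times)) < b_star / np.asarray(lam_at_spikes)
        isi = np.diff(np.asarray(spike_times)[keep]) * b_star  # stretch to unit rate
        return kstest(isi, "expon")    # small p-value -> reject the model

For the rescaled spike times, the same KS test against the unit-rate exponential applies to their successive differences.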
To correct for the multiple tests, we employ Simes' procedure. It tests the global null hypothesis that all tested sub-hypotheses are true against the alternative hypothesis that at least one hypothesis is false. To this end, it transforms the ordered list of p-values p_(1), . . . , p_(K) into K·p_(1)/1, K·p_(2)/2, . . . , K·p_(K)/K. If any of the transformed p-values is less than the significance level α = .05, the model is rejected [15]. (The K tests contain overlapping regions of the same spike train; hence, we expect the statistical tests to be correlated. In these cases, a simple Bonferroni correction would be too conservative [16].)

2.2.3 Complementing point processes

The idea of thinning might also be used the other way round. Assume the observations u_i have been generated by thinning a homogeneous Poisson process with rate C, using the modeled conditional intensity λ(t|H_t) as the lower bound. Then we can define a complementary process λ_c(t) = C − λ(t|H_t) such that, adding spikes from the complementary point process to the observed spikes, the resulting process will be a homogeneous Poisson process with rate C. This algorithm is a straightforward inversion of the thinning algorithms discussed in [2, 1]. It might happen that the upper bound C of the modeled intensity is much larger than the average λ(t). In that case, the observed spike pattern would be distorted with a high number of Poisson spikes from the complementary process and the test power would be low. To avoid this, a similar technique as for the thinning procedure can be employed. Define a threshold C* ≤ C and consider only the regions of the spike train for which λ(t|H(t)) < C*. Apply the complementing procedure on these parts of the spike train to obtain a point process with rate C* when concatenating the intervals. This process can be repeated K times with values C* ranging from B to C. A multiple-test correction has to be used; again we propose Simes' method (see previous section).

2.3 Creating surrogate point processes from time series

Since the time-rescaling theorem can only be used when the conditional intensity λ(t|H_t) and the exact spike times {u_i} are known, it is not a priori clear how it applies to discretized time-series models. For such cases, we propose to generate surrogate point process samples that are equivalent to the observed time series. To apply the time-rescaling theorem on discretized models such as GLMs, the integral of the time transformation is replaced by a discrete sum over bins (the naïve time-rescaling). Taking the simplest example of a homogeneous Poisson process, it is evident that the possible values for the rescaled intervals form a finite set. This contradicts the time-rescaling theorem, which states that the intervals are (continuously) exponentially distributed. Hence, using the time-rescaling theorem on discretized data produces a bias [17]. While Haslinger et al. considered a modification of the time-rescaling theorem to explicitly account for the discrete nature of the model [17], we propose a general, simple scheme for forming surrogate point processes from Poisson- and Bernoulli-GLMs that can be used for the continuous time-rescaling theorem as well as for any other goodness-of-fit test designed for point-process data (Figure 3).

Poisson-GLMs: The observation consists of a sequence of count variables c_i that is modeled as a sample from Poisson distributions with mean μ_i. Hence, the modeled process can be regarded as a piecewise-constant intensity function. The expected number of spikes of a Poisson process is related to its intensity via μ_i = λ_i Δ, such that we can construct the conditional intensity function as
2.3 Creating surrogate point processes from time series

Since the time-rescaling theorem can only be used when the conditional intensity λ(t|H_t) and the exact spike times {u_i} are known, it is not a priori clear how it applies to discretized time-series models. For such cases, we propose to generate surrogate point-process samples that are equivalent to the observed time series.

To apply the time-rescaling theorem to discretized models such as GLMs, the integral of the time transformation is replaced by a discrete sum over bins (the naïve time-rescaling). Taking the simplest example of a homogeneous Poisson process, it is evident that the possible values for the rescaled intervals form a finite set. This contradicts the time-rescaling theorem, which states that the intervals are (continuously) exponentially distributed. Hence, using the time-rescaling theorem on discretized data produces a bias [17]. While Haslinger et al. considered a modification of the time-rescaling theorem that explicitly accounts for the discrete nature of the model [17], we propose a general, simple scheme for forming surrogate point processes from Poisson- and Bernoulli-GLMs that can be used with the continuous time-rescaling theorem as well as with any other goodness-of-fit test designed for point-process data (Figure 3).

Figure 3: Creating surrogate point processes from time series. For bin-free point-process models, for which the spike times and a conditional intensity λ(t|H(t)) are available, goodness-of-fit tests for point processes can be readily applied. For Poisson-GLMs, exact spike times are drawn inside each bin for the specified number of spikes that were observed; the piece-wise constant conditional intensity function is linked to the modeled number of counts per bin via λ_i = Δ⁻¹μ_i. For Bernoulli-GLMs, the probability p_i of obtaining at least one spike per bin is modeled. For each bin with spikes (b_i = 1), assuming a local Poisson process, a sample c_i from a biased Poisson distribution with mean μ_i = −ln(1 − p_i) is drawn together with corresponding spike times. Finally, point-process-based goodness-of-fit tests may be applied to this surrogate spike train.

Poisson-GLMs: The observation consists of a sequence of count variables c_i that is modeled as a sample from Poisson distributions with mean μ_i. Hence, the modeled process can be regarded as a piecewise-constant intensity function. The expected number of spikes of a Poisson process is related to its intensity via μ_i = λ_i·Δ, such that we can construct the conditional intensity function as piece-wise constant with values λ_i = Δ⁻¹μ_i. Conditioned on the number of spikes that occurred in a homogeneous Poisson process of rate λ_i, the exact spike times are uniformly distributed inside bin i. A surrogate point process can therefore be constructed from a Poisson-GLM by generating random spike times (i − 1 + Unif(0, 1))·Δ for each spike within bin i (1 ≤ i ≤ N) for all bins with c_i > 0. One can then proceed to the point-process-based goodness-of-fit tools using the surrogate spike train and its conditional intensity λ_i.

Bernoulli-GLMs: Based on the observed binary spike train {b_i}, the sequence of probabilities p_i of spiking within bin i is modeled. We can relate this to the point-process framework using the following observations: assume that p_i denotes the probability of finding at least one spike within each bin² and that, locally, the process behaves like a Poisson process. Then p_i = P_{μ_i}^{(poisson)}(X ≥ 1) = 1 − P_{μ_i}^{(poisson)}(X = 0) = 1 − exp(−μ_i). The conditional intensity is given by λ_i = Δ⁻¹μ_i = −Δ⁻¹ ln(1 − p_i). In practice, for each bin with b_i = 1, we draw the number of spikes within the bin by first sampling from the distribution P_{μ_i}^{(poisson)}(X = k | k ≥ 1), and then sample exact spike times uniformly as in the case of the Poisson-GLMs.

²Such clipping is implicitly performed in many studies, e.g. in [18, 19, 20].

3 Results

Here, we compare the performance of the three different approaches in detecting wrongly specified models, using examples of models that are commonly applied in neural data analysis. For the thinning and complementing procedures, K = 10 partitions were chosen (see section 2.2.2). Unless otherwise noted, we report the test power at a specificity of 1 − α = .95. The Poisson hypothesis in the proposed procedures is tested by a Kolmogorov-Smirnov test on the inter-spike intervals of the transformed process.
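Before turning to the examples, a compact sketch of the surrogate construction from Section 2.3; the bins are 0-indexed here, and the zero-truncated Poisson draw uses rejection sampling, which is one of several valid choices. Function names are illustrative.

```python
import numpy as np

def surrogate_from_poisson_glm(counts, dt, rng=None):
    """Draw c_i exact spike times uniformly inside bin i (0-indexed).
    The matching piece-wise constant intensity is lambda_i = mu_i / dt."""
    rng = np.random.default_rng() if rng is None else rng
    times = [(i + rng.uniform(0.0, 1.0, int(c))) * dt
             for i, c in enumerate(counts) if c > 0]
    return np.sort(np.concatenate(times)) if times else np.empty(0)

def surrogate_from_bernoulli_glm(spikes, p, dt, rng=None):
    """For each bin with b_i = 1, draw a zero-truncated Poisson count with
    mean mu_i = -ln(1 - p_i), then place the spike times uniformly inside
    the bin. Assumes p_i < 1 for all bins."""
    rng = np.random.default_rng() if rng is None else rng
    mu = -np.log1p(-np.asarray(p, dtype=float))   # mu_i = -ln(1 - p_i)
    counts = np.zeros(len(spikes), dtype=int)
    for i in np.flatnonzero(spikes):
        c = 0
        while c < 1:                              # sample from P(X = k | k >= 1)
            c = rng.poisson(mu[i])
        counts[i] = c
    return surrogate_from_poisson_glm(counts, dt, rng)
```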
3.1 Example: Inhomogeneous Poisson process

Consider an inhomogeneous Poisson process with band-limited intensity: λ(t|H_t) = λ(t) = 20 Hz + Σ_{j=1}^{J} u_j · sin(2πf(t − jT/J)) / (π(t − jT/J)), with f = 1 Hz and J = 40 coefficients u_j that were randomly drawn from a uniform distribution on the interval [0, 20]. The process was simulated over a length of T = 20 s and the intensity was discretized with Δ = 1 ms. Negative intensities were clipped to zero. A binary spike train was generated by calculating the probability of at least one spike in each time bin as p_i = 1 − exp(−λ(t_i)Δ) and drawing samples from a Bernoulli distribution with the specified probabilities p_i.

Figure 4: Inhomogeneous Poisson process. (A) Sample intensity functions for an undistorted intensity (black line) and two models with jitters in the coefficients (ε = 12, medium jitter, and ε = 30, large jitter). (B) The test power of each test as a function of the jitter strength. The dashed line indicates the level of the medium jitter strength (red line in figure A). (C) ROC curve analysis for an intermediate jitter strength of ε = 12. The intersection of the curves with the dashed line corresponds to the test power at a significance level of α = .05.

For evaluating the different algorithms, wrong models for the intensity were created with jittered coefficients u′_k = u_k + ε·Unif(−1, 1), where ε indicates the strength of the deviation from the true model. For each jitter strength, N = 1000 spike trains were generated from the true model and λ(t|H_t) was constructed using the wrong model (Figure 4A). For any ε > 0, the fraction of rejected models defines the sensitivity or test power. For ε = 0, the fraction of accepted models defines the specificity, which was controlled to be 1 − α = .95 for each test.

All three methods (rescaling, thinning, complementing) show a specified type-I error of approximately 5% (ε = 0) and progressively detect the wrong models. Notably, the complementing and thinning procedures detect a departure from the correct model earlier than the classical rescaling (Figure 4B). For comparison, the naïve implementation of the rescaling transformation is also shown. The significance level for the KS test used for the naïve time-rescaling was adjusted to α = .015 to achieve a 95% specificity; the adjustment was necessary due to the discretization bias (see section 2.3). For models with an intermediate jitter strength (ε = 12), ROC curves were constructed. Here, for a given significance level α, a pair of true and false positive rates can be calculated and plotted for each test (taking N = 1000 repetitions using the true model and the model with jittered coefficients). It can be seen that especially for intermediate jitter strengths, complementing and thinning outperform time-rescaling (Figure 4C), independent of the chosen significance level.
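For illustration, a sketch of the simulation just described, assuming the sinc-shaped basis reconstructed above; note that np.sinc(x) = sin(πx)/(πx), so each basis function sin(2πfτ)/(πτ) equals 2f·sinc(2fτ). Seed and variable names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, f, J = 20.0, 1e-3, 1.0, 40            # duration [s], bin width, params from the text
t = np.arange(0.0, T, dt)
u = rng.uniform(0.0, 20.0, size=J)           # coefficients u_j ~ Unif[0, 20]

# band-limited intensity: 20 Hz baseline plus a sum of sinc-shaped basis functions
lam = 20.0 + sum(u[j] * 2 * f * np.sinc(2 * f * (t - (j + 1) * T / J))
                 for j in range(J))
lam = np.clip(lam, 0.0, None)                # clip negative intensities to zero

p = 1.0 - np.exp(-lam * dt)                  # P(at least one spike per bin)
spike_train = rng.random(t.size) < p         # binary spike train (Bernoulli draws)
```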
3.2 Example: Renewal process

In a second example, we consider renewal processes, i.e. the inter-spike intervals are an i.i.d. sample from a specific probability distribution p(Δt). In this case, the conditional intensity is given by λ(t|H_t) = p(t − t̂) / (1 − ∫₀^{t−t̂} p(u) du), where t̂ denotes the time of the last spike prior to time t. For this example, we chose the Gamma distribution, as it is commonly used to model real spike trains [4, 3, 7]. The spike train was generated from a true model following a Gamma distribution with scale parameter A = 0.032 and shape parameter B = 6.25: p(Δt) = (Δt)^{B−1} e^{−Δt/A} / (A^B Γ(B)). Wrong models were generated by scaling the shape and scale parameters by a factor of 1 + ε ("jitter") while keeping the expected value of the distribution constant (i.e. B′ = (1 + ε)B, A′ = (1 + ε)⁻¹A) (Figure 5A). For each jitter strength, N = 1000 data sets of length T = 20 s were generated from the true model, and the tests were applied using the wrong model.

Figure 5: Renewal process. (A) Inter-spike interval distributions for the undistorted (black line) and distorted models (medium jitter, ε = 0.5, and strong jitter, ε = 1.0). For comparison, a sample ISI histogram from one of the simulations is shown in gray. Note that the mean of the three distributions is matched to be the same (vertical dashed line). (B) The test power of each test as a function of the jitter strength. The dashed line indicates the level of the medium jitter strength (red line in figure A). (C) ROC curve analysis for an intermediate jitter strength of ε = 0.5. The intersection of the curves with the dashed line corresponds to the test power at a significance level of α = .05.

The analysis of test power for each test and the ROC curve analysis for an intermediate jitter strength reveal that time-rescaling is slightly superior to thinning and complementing (Figure 5B and C). The naïve time-rescaling performs worst (adjusted significance level for the KS test, α = .017).

3.3 Example: Inhomogeneous Spike Response Model

We model an inhomogeneous spike response model with escape noise using a Bernoulli-GLM [21]. The spiking probability is modulated by an inhomogeneous rate r(t). Additionally, for each spike, a post-spike kernel is added to the process intensity. The rate function is modeled, as in the first example, as a band-limited function r(t_i) = Σ_{j=1}^{J} u_j · sin(2πf(t_i − jT/J)) / (π(t_i − jT/J)), with f = 1 Hz and J = 40 coefficients that were randomly drawn from a uniform distribution on the interval [−0.2, 0.2]. The post-spike kernel η(Δt) is modeled as a sum of three exponential functions (τ = 5 ms, 25 ms and 1 s) with appropriate amplitudes so as to mimic a relative refractory period, a small rebound, and a slow (inhibitory) adaptation. To construct the Bernoulli-GLM, the spiking probability p_i per bin of length Δ = 1 ms is p_i = 1/(1 + exp(−s_i)), with s_i = −3 + r(t_i) + Σ_{u_j < t_i} η(u_j − t_i).

A binary time series (the spike train) was generated for a duration of T = 20 s. The jittered models were constructed by adding a jitter ε to the coefficients of the inhomogeneous rate modulation (Figure 6A). For each jitter strength, N = 1000 data sets were generated from the true model, and the tests were applied using the wrong model. Both thinning and complementing are able to detect smaller distortions than the time-rescaling on either the surrogate or the discrete data (Figure 6B; adjusted significance level for the naïve rescaling, α = .018). A ROC curve analysis for an intermediate jitter strength (ε = 0.4) supports this finding (Figure 6C).

Figure 6: Inhomogeneous Spike Response Model. (A) Sample intensity functions for an undistorted intensity (black line) and two misspecified models (medium jitter, ε = 0.4, and strong jitter, ε = 1.0). (B) The test power of each test as a function of the jitter strength. The dashed line indicates the level of the medium jitter strength (red line in figure A). (C) ROC curve analysis for an intermediate jitter strength of ε = 0.4. The intersection of the curves with the dashed line corresponds to the test power at a significance level of α = .05.
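As a concrete aid for the renewal example of Section 3.2, the conditional intensity is simply the hazard function of the ISI distribution; a sketch using SciPy's Gamma distribution with the parameters given in the text:

```python
import numpy as np
from scipy import stats

def gamma_hazard(t, t_last, shape=6.25, scale=0.032):
    """Conditional intensity of a Gamma renewal process, i.e. the hazard
    lambda(t | H_t) = p(t - t_hat) / (1 - F(t - t_hat)), with shape B = 6.25
    and scale A = 0.032 as in the text."""
    dt = np.asarray(t) - t_last
    pdf = stats.gamma.pdf(dt, a=shape, scale=scale)
    sf = stats.gamma.sf(dt, a=shape, scale=scale)   # survival function 1 - F
    return pdf / sf
```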
4 Discussion

Assessing goodness-of-fit for Generalized Linear Models has mostly been done by applying the time-rescaling transformation that is defined for point processes, assuming a match between those approaches. When the per-bin probability of spiking cannot be regarded as low, this approximation breaks down and creates a bias in the time-rescaling transformation [17]. In a first step, we proposed a procedure to create surrogate point processes from discretized models, such as Bernoulli- and Poisson-GLMs, that do not exhibit this bias. Throughout all the examples, the time-rescaling theorem applied to the surrogate point process was systematically better than applying the naïve time-rescaling to the discrete data. Since only the adjusted time-rescaling procedure allows reliable control of the specificity of the test, it should be preferred over the classical time-rescaling in all cases where discretized models are used.

We have presented two alternatives to an application of the time-rescaling theorem. In the first procedure, the observed spike train is thinned according to the value of the conditional intensity at the times of the spikes; the resulting process is then a homogeneous Poisson process with a rate that is equal to the lower bound on the conditional intensity. The second proposed method builds on the idea that an intensity function λ(t) with an upper bound C can be filled up to a homogeneous Poisson process of rate C by adding spike samples from the complementary process C − λ(t). The proposed tests work best if the lower and upper bounds are tight. However, in most practical cases, especially the lower bound will be prohibitively low for applying any statistical test to the thinned process. As a remedy, we proposed to consider only regions of λ(t|H(t)) for which the intensity exceeds a given threshold, and to repeat the thinning for different thresholds. This successfully overcomes the limitation that may, up to now, have prevented the use of the thinning algorithm as a goodness-of-fit measure for neural models.

The three tests are complementary in the sense that they are sensitive to different deviations of the modeled intensity function from the true one. Time-rescaling is only sensitive to the total integral of the intensity function between spikes, while thinning exclusively considers the intensity function at the times of spikes and is insensitive to its value at places where no spikes occurred.
Complementing, in turn, is sensitive to the exact shape of λ(t) regardless of where the spikes of the original observations are. For the examples of an inhomogeneous Poisson process and the Spike Response Model, thinning and complementing outperform the sensitivity of the simple time-rescaling procedure: they can detect deviations from the model that are only half as large as the ones necessary to alert the test based on time-rescaling. For modeling renewal processes, time-rescaling was slightly advantageous compared to the two other methods. This should not come as a surprise, since the time-rescaling test is known to be sensitive to modeling the distribution of inter-spike intervals [3].

Apart from likelihood criteria [12, 22, 23], there exist few goodness-of-fit tools for neural models based on Generalized Linear Models [2, 24]. With the proposed procedure for surrogate point processes, we bridge the gap between such discrete models and point processes. This enables the use of additional tests from the point-process domain, such as the thinning and complementing procedures. We expect these to be valuable contributions to the general practice of statistical evaluation in modeling single neurons as well as neural populations.

Acknowledgments

Felipe Gerhard thanks Gordon Pipa and Robert Haslinger for helpful discussions. Felipe Gerhard is supported by the Swiss National Science Foundation (SNSF) under grant number 200020117975.

References

[1] Daley, D. J., & Vere-Jones, D. (2002). An Introduction to the Theory of Point Processes, Volume 1 (2nd ed.). New York: Springer.
[2] Ogata, Y. (1981). On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1).
[3] Brown, E. N., Barbieri, R., Ventura, V., Kass, R. E., & Frank, L. M. (2002). The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation, 14(2), 325-346.
[4] Barbieri, R., Quirk, M. C., Frank, L. M., Wilson, M. A., & Brown, E. N. (2001). Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. Journal of Neuroscience Methods, 105(1), 25-37.
[5] Koyama, S., & Kass, R. E. (2008). Spike train probability models for stimulus-driven leaky integrate-and-fire neurons. Neural Computation, 20(7), 1776-1795.
[6] Rigat, F., de Gunst, M., & van Pelt, J. (2006). Bayesian modelling and analysis of spatio-temporal neuronal networks. Bayesian Analysis, 1(4), 733-764.
[7] Shimokawa, T., & Shinomoto, S. (2009). Estimating instantaneous irregularity of neuronal firing. Neural Computation, 21(7), 1931-1951.
[8] Wojcik, D. K., Mochol, G., Jakuczan, W., Wypych, M., & Waleszczyk, W. (2009). Direct estimation of inhomogeneous Markov interval models of spike trains. Neural Computation, 21(8), 2105-2113.
[9] Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J Neurophysiol, 93(2), 1074-1089.
[10] Pawitan, Y. (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.
[11] Doya, K., Ishii, S., Pouget, A., & Rao, R. P. N. (2007). Bayesian Brain: Probabilistic Approaches to Neural Coding. Cambridge, MA: MIT Press.
[12] Pillow, J. W. (2009). Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, & A. Culotta (Eds.),
Advances in Neural Information Processing Systems 22 (pp. 1473-1481). Cambridge, MA: MIT Press.
[13] Lewis, P. A. W., & Shedler, G. S. (1979). Simulation of nonhomogeneous Poisson processes by thinning. Nav. Res. Logist. Q., 26, 403-413.
[14] Schoenberg, F. P. (2003). Multidimensional residual analysis of point process models for earthquake occurrences. Journal of the American Statistical Association, 98(464), 789-795.
[15] Simes, R. J. (1986). An improved Bonferroni procedure for multiple tests of significance. Biometrika, 73(3), 751-754.
[16] Rodland, E. A. (2006). Simes' procedure is "valid on average". Biometrika, 93(3), 742-746.
[17] Haslinger, R., Pipa, G., & Brown, E. (2010). Discrete time rescaling theorem: Determining goodness of fit for discrete time statistical models of neural spiking. Neural Computation, 22(10), 2477-2506.
[18] Schneidman, E., Berry, M. J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007-1012.
[19] Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., & Simoncelli, E. P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995-999.
[20] Tang, A., Jackson, D., Hobbs, J., Chen, W., Smith, J. L., Patel, H., Prieto, A., Petrusca, D., Grivich, M. I., Sher, A., Hottowy, P., Dabrowski, W., Litke, A. M., & Beggs, J. M. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. Journal of Neuroscience, 28(2), 505-518.
[21] Gerstner, W., & Kistler, W. M. (2002). Spiking Neuron Models. Cambridge: Cambridge University Press.
[22] Wood, F., Roth, S., & Black, M. (2006). Modeling neural population spiking activity with Gibbs distributions. In Y. Weiss, B. Schölkopf, & J. Platt (Eds.), Advances in Neural Information Processing Systems 18 (pp. 1537-1544). Cambridge, MA: MIT Press.
[23] Pillow, J., Berkes, P., & Wood, F. (2009). Characterizing neural dependencies with copula models. In D. Koller, D. Schuurmans, Y. Bengio, & L. Bottou (Eds.), Advances in Neural Information Processing Systems 21 (pp. 129-136). Cambridge, MA: MIT Press.
[24] Brown, E. N., Barbieri, R., Eden, U. T., & Frank, L. M. (2003). Likelihood methods for neural spike train data analysis. In J. Feng (Ed.), Computational Neuroscience: A Comprehensive Approach (pp. 253-286). London: Chapman and Hall.
A Discriminative Latent Model of Image Region and Object Tag Correspondence

Yang Wang*
Department of Computer Science, University of Illinois at Urbana-Champaign
[email protected]

Greg Mori
School of Computing Science, Simon Fraser University
[email protected]

Abstract

We propose a discriminative latent model for annotating images with unaligned object-level textual annotations. Instead of using the bag-of-words image representation currently popular in the computer vision community, our model explicitly captures more intricate relationships underlying visual and textual information. In particular, we model the mapping that translates image regions to annotations. This mapping allows us to relate image regions to their corresponding annotation terms. We also model the overall scene label as latent information. This allows us to cluster test images. Our training data consist of images and their associated annotations. But we do not have access to the ground-truth region-to-annotation mapping or the overall scene label. We develop a novel variant of the latent SVM framework to model them as latent variables. Our experimental results demonstrate the effectiveness of the proposed model compared with other baseline methods.

1 Introduction

Image understanding is a central problem in computer vision that has been extensively studied in the form of various types of tasks. Some previous work focuses on classifying an image with a single label [6]. Others go beyond single labels and assign a list of annotations to an image [1, 10, 21]. Recently, efforts have been made to combine various tasks (i.e. classification, annotation, segmentation, etc.) together to achieve a more complete understanding of an image [11, 12].

In this paper, we consider the problem of image understanding with unaligned textual annotations. In particular, we focus on the scenario where the annotations represent the names of the objects present in an image. The input to our learning algorithm is a set of images with unaligned textual annotations (object names). Our goal is to learn a model to predict the annotation (i.e. object names) for a new image. As a by-product, our model also roughly localizes the image regions corresponding to the annotation; see Fig. 1. The main contribution of this paper is the development of a model that incorporates this object-annotation to image-region correspondence in a discriminative framework.

In the computer vision literature, there has been a lot of work on exploiting images and their associated textual information. Barnard et al. [1] predict words associated with whole images or regions by learning a joint distribution of image regions and words. Berg et al. [3] learn to name faces appearing in news pictures by learning a probabilistic model of face appearances, names, and textual contexts. Wang et al. [21] use a learned bag-of-words topic model to simultaneously classify and annotate images. Loeff et al. [13] discover scenes by exploiting the correlation between images and their annotations. Some recent work towards total scene understanding [11, 12] tries to build sophisticated generative models that jointly perform several tasks, e.g. scene classification, object recognition, image annotation, and image segmentation.

*Work done while the author was with Simon Fraser University.

Figure 1: Our goal is to learn a model using images and their associated unaligned textual object annotations (a) (e.g. mallet, athlete, horse, ground, tree) as the training data.
Given a new image (b), we can use the model to predict its textual annotations and roughly localize the image regions corresponding to each of the annotation terms (c).

Most of the previous work uses fairly crude "bag-of-words" models, treating image features (extracted from either segmented regions or local interest points) and textual annotations as unordered entities and looking at their co-occurrence statistics. Very little work explicitly models the more detailed relationships between image regions and annotations that are obvious to humans. For example, if an image is over-segmented into a large number of segments, each segment typically corresponds to at most one object. However, most of the previous work ignores this constraint and allows an image region to be used as evidence to explain different objects mentioned by the annotations. In this paper, we present a discriminative latent model that captures image regions, textual annotations, mappings between visual and textual information, and overall scene labels in a more explicit manner. Some work [1, 3] tries to incorporate the mapping information into a generative model. However, due to the limitations of the machine learning tools used in those works, they did not properly enforce the aforementioned constraint on how image regions are mapped to annotations. There is also work [2] on augmenting training data with this mapping information, but it is unclear how it can be generalized to test data. With the recent advancements in learning with complex structured data [7, 18, 21, 25], we believe now is the time to revisit this line of ideas and examine other modeling tools.

The work by Socher et al. [17] is the most relevant to ours. In that work, they learn to annotate and segment images by mapping image regions and textual words to a latent meaning space using context and adjective features. There are important distinctions between our work and [17]. First of all, the input to [17] is a set of images (a handful of which are manually labeled) of a single sport category, and a collection of news articles for that sport. The news articles are generic for that sport, and the images are not the news photographs directly associated with those news articles. Although they have experimented with applying their model on image collections of mixed sport categories, their method seems to work better with single-sport-category training. In contrast, the input to our learning problem is a set of images from several sport categories, together with their associated textual annotations. We treat the sport category as a latent variable (we call it the scene label) and implicitly infer it during learning.

2 Model

We propose a discriminative latent model that jointly captures the relationships between image segments, textual annotations, region-text correspondence, and overall image visual scene labels. Of course, only the image segments and textual annotations are observed on training data. All the other information (e.g. scene labels, the mapping between regions and annotations) is treated as latent variables in the model. A graphical illustration of our model is shown in Fig. 2.

The input to our learning module is a set of ⟨x, y⟩ pairs, where x denotes an image and y denotes the annotation associated with this image. We partition the image into R regions using the segmentation algorithm in [8], i.e. x = [x_1, x_2, ..., x_R]. For each image region x_i, we extract four types of visual features (see [14]): shape, texture, color, and location. Each of these feature types is vector-quantized to obtain codewords for this feature type. Following [17], we use 20, 25, 40, 8 codewords for the four feature types, respectively. In the end, each region x_i is represented as a 4-dimensional vector x_i = (x_{i1}, x_{i2}, x_{i3}, x_{i4}), where each x_{ic} is the corresponding codeword of the c-th feature type for this region. The annotation y of an image is represented as a binary vector y = (y_1, y_2, ..., y_V), where V is the total number of possible annotation terms. As a terminological convention, we use "annotation" to denote the vector y and "annotation term" to denote each component y_j of the vector.
Each of these feature types is vector quantized to obtain codewords for this feature type. Following [17], we use 20, 25, 40, 8 codewords for each of the four feature types, respectively. In the end, each region xi is represented as a 4-dimensional vector xi = (xi1 , xi2 , xi3 , xi4 ), where each xic is the corresponding codeword of the c-th feature type for this region. The annotation y of an image is represented as a binary vector y = (y1 , y2 , ..., yV ), where V is the total number of possible annotation terms. As a terminological convention, we use ?annotation? to denote the vector y and ?annotation term? to denote each component yj of the vector. An annotation 2 image regions dog athlete 1 car horse 1 ...... 0 0 annotation Figure 2: Graphical illustration of our model. An input image is segmented into several regions. The annotation of the image is represented as a 0-1 vector indicating the presence/absence of each possible annotation term. Our model captures the unobserved mapping that translate image regions to annotation terms associated with the image (e.g. horse, athlete). For annotation terms not associated with the image (e.g. car, dog), there are no mapped image regions. Our model also captures relationship between the unobserved scene label (e.g. polo) and image regions/annotations. term yj is ?active? (yj = 1) if it is associated with this image, and is ?inactive? (yj = 0) otherwise. We further assume the number of regions of an image is larger than or equal to the number of active PV annotation terms for an image, i.e. R ? j=1 yj . In this work, we assume there are no visually irrelevant annotation terms (e.g. ?wind?), and there are no annotation terms (e.g. ?people? and ?athlete?) of an image that refer to the same concept. These can be achieved by pre-processing the annotation terms with Wordnet (see [17]). Given an image x and its annotation y, we assume there is an underlying unobserved many-to-one mapping which translates R image regions to each of the active annotation terms. We restrict the mapping to have the following conditions: (i) each image region is mapped to at most one annotation term. This condition will ensure that an image region is not used to explain two different annotations; (ii) an active annotation term has one or more image regions mapped to it. This condition will make sure that if an annotation term (say ?building?) is assigned to an image, there is at least one image region supporting this annotation term; (iii) an inactive annotation term has no image regions mapped to it. This condition will guarantee there are no image regions supporting an inactive annotation term. More formally, we introduce a matrix z = {zij : 1 ? i ? R, 1 ? j ? V } defined in the following to represent this mapping for an image with R regions:  1 if the i-th image region is mapped to the j-th annotation term (1) zij = 0 otherwise We use Y to denote the domain of all possible assignments of y. For a fixed annotation y, we use Z(y) to denote the set of all possible many-to-one mappings that satisfy the conditions (i,ii,iii). It is easy to verify that any z ? Z(y) can be represented using the following three sets of constraints: X zij ? 1, ?i; max zij = yj , ?j; zij ? {0, 1}, ?i, ?j (2) j i For a given image, we also assume a discrete unobserved ?scene? label s which takes its value between 1 and S. We introduce the scene label to capture the fact that the annotations of images are typically well clustered according to their underlying scenes. 
For example, an image of a ?sailing? scene tends to have annotation terms like ?athlete?, ?sailboat?, ?water?, etc. However, it is not quite simple to define the vocabulary to label scenes [13]. In our work, we treat the scene label as a latent variable (hence we do not need its ground-truth label or even a vocabulary for defining it) and let the learning algorithm automatically figure out what constitutes a scene. As we will demonstrate in the experiment, the ?scenes? learned by our model on a collection of sport images do match our intuitions, e.g. they roughly correspond to different sport categories in the data. Inspired by the latent SVM [7, 25], we measure the compatibility between an image x and an annotation y using the following scoring function: f? (x, y) = max max ?? ? ?(x, y, z, s) s?S z?Z(y) (3) where ? are the model parameters and ?(x, y, z, s) is a feature vector defined on x, y, z and s. The model parameters have three parts ? = {?, ?, ?}, and ?? ? ?(x, y, z, s) is defined as: ?? ? ?(x, y, z, s) = ?? ?(x, z) + ? ? ?(x, s) + ? ? ?(y, s) 3 (4) The details of each of the terms in (4) are described in the following. Region-Annotation Matching Potential ?? ?(x, z): This potential function measures the compatibility of mapping image regions to their corresponding annotation terms. Recall an image region xi consists of codewords from four different feature types xi = (xi1 , xi2 , xi3 , xi4 ). Let Nc (c = 1, 2, 3, 4) denotes the number of codewords of feature type c. The parameters ? consist of four components ? = {?c }4c=1 corresponding to each of the four feature types. Each ?c is a matrix of size Nc ? V , where an entry ?cw,j can be interpreted as the compatibility between the codeword w (1 ? w ? Nc ) of feature type c and the annotation term j (1 ? j ? V ). The potential function is written as: ?? ?(x, z) = 4 X R X V X ?cxic ,j ? zij = Nc X 4 X R X V X ?cw,j ? 1(xic = w) ? zij (5) c=1 i=1 w=1 j=1 c=1 i=1 j=1 where 1(?) is the indicator function. Note that the definition of this potential function does not involve y since y is implicitly determined by z, i.e. yj = maxi zij . Image-Scene Potential ? ? ?(x, s): This potential function measures the compatibility between an image x and a scene label s. Similarly, the parameters ? consist of four parts ? = {? c }4c=1 correc sponding to the four feature types, where an entry ?w,s is the compatibility between the codeword w of type c and the scene label s. This potential function is written as: ? ? ?(x, s) = 4 X R X ?xc ic ,s = Nc X 4 X R X S X c ? 1(xic = w) ? 1(s = t) ?w,t (6) c=1 i=1 w=1 t=1 c=1 i=1 Annotation-Scene Potential ? ? ?(y, s): This potential function measures the compatibility between an annotation y and a scene label s. The parameters ? consist of S components ? = {? t }St=1 t corresponding to each of the scene label. Each component ? t is a V ? 2 matrix, where ?j,1 is the t compatibility of setting yj = 1 for the scene label t, and ?j,0 is the compatibility of setting yj = 0 for the scene label t. This potential function is written as: ? ? ?(y, s) = V X s = ?j,y j j=1 t=1 j=1 = S  V X X  t t ?j,0 ? 1(yj = 0) ? 1(s = t) + ?j,1 ? 1(yj = 1) ? 1(s = t) (7a) V X S  X j=1 t=1  t t ?j,0 ? (1 ? yj ) ? 1(s = t) + ?j,1 ? yj ? 1(s = t) (7b) The equivalence of (7a) and (7b) is due to 1(yj = 0) ? 1 ? yj and 1(yj = 1) ? yj for yj ? {0, 1}, which are easy to verify. 3 Inference Given the model parameters ? = {?, ?, ?}, the inference problem is to find the best annotation y? for a new image x, i.e. y? = arg maxy f? (x, y). 
3 Inference

Given the model parameters θ = {α, β, γ}, the inference problem is to find the best annotation y* for a new image x, i.e. y* = arg max_y f_θ(x, y). The inference requires solving the following optimization problem:

  max_{y∈Y} f_θ(x, y) = max_{s∈S} max_{y∈Y} max_{z∈Z(y)} θ⊤Φ(x, y, z, s).   (8)

Since we can enumerate all possible values of the scene label s, the main difficulty of solving (8) is the inner maximization over y and z for a fixed s, i.e.:

  max_{y∈Y} max_{z∈Z(y)} θ⊤Φ(x, y, z, s).   (9)

In the following, we develop a method for solving (9) based on linear program (LP) relaxation. To formulate the problem as an LP, we first define:

  a_ij = Σ_{c=1}^{4} Σ_{w=1}^{N_c} α^c_{w,j} · 1(x_{ic} = w), ∀i, ∀j;   b_j = γ^s_{j,1} − γ^s_{j,0}, ∀j.   (10)

Then it is easy to verify that the optimization problem in (9) can be equivalently written as (the constant in the objective not involving y or z is omitted):

  max_{y,z} Σ_{i,j} a_ij · z_ij + Σ_j b_j · y_j  s.t.  Σ_j z_ij ≤ 1 ∀i;  max_i z_ij = y_j ∀j;  z_ij ∈ {0, 1} ∀i, ∀j.   (11)

The optimization problem (11) is not convex, but we can relax its constraints to make it an LP. First we reformulate (11) as an integer linear program (ILP):

  max_{y,z} Σ_{i,j} a_ij · z_ij + Σ_j b_j · y_j  s.t.  Σ_j z_ij ≤ 1 ∀i;  z_ij ≤ y_j ≤ Σ_i z_ij ∀i, ∀j;  z_ij ∈ {0, 1}, y_j ∈ {0, 1}.   (12)

It is easy to verify that (11) and (12) are equivalent. Of course, (12) still has the integral constraint z_ij ∈ {0, 1}, which makes the optimization problem NP-hard. So we further relax the value of z_ij to a real value in the range [0, 1]. Putting everything together, the LP relaxation of (11) can be written as:

  max_{y,z} Σ_{i,j} a_ij · z_ij + Σ_j b_j · y_j  s.t.  Σ_j z_ij ≤ 1 ∀i;  z_ij ≤ y_j ≤ Σ_i z_ij ∀i, ∀j;  0 ≤ z_ij ≤ 1, 0 ≤ y_j ≤ 1.   (13)

After solving (13) with any LP solver, we round z_ij to the closest integer and obtain y_j as y_j = max_i z_ij.
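A sketch of the LP relaxation (13) for a fixed scene label, using scipy.optimize.linprog; in practice one would enumerate s, solve this LP for each value, and keep the highest-scoring solution. The dense constraint construction is for clarity only and is an illustrative choice, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def infer_yz(a, b):
    """LP relaxation of Eq. (13) for a fixed scene label s.
    a: R x V matrix of a_ij; b: length-V vector of b_j.
    Variables are [z_11..z_RV, y_1..y_V]; maximize sum a_ij z_ij + sum b_j y_j."""
    R, V = a.shape
    n = R * V + V
    c = -np.concatenate([a.ravel(), b])          # linprog minimizes, so negate
    A_ub, b_ub = [], []
    for i in range(R):                           # sum_j z_ij <= 1
        row = np.zeros(n); row[i * V:(i + 1) * V] = 1
        A_ub.append(row); b_ub.append(1)
    for i in range(R):                           # z_ij - y_j <= 0
        for j in range(V):
            row = np.zeros(n); row[i * V + j] = 1; row[R * V + j] = -1
            A_ub.append(row); b_ub.append(0)
    for j in range(V):                           # y_j - sum_i z_ij <= 0
        row = np.zeros(n); row[R * V + j] = 1
        row[j::V][:R] = -1                       # the R entries z_1j .. z_Rj
        A_ub.append(row); b_ub.append(0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    z = np.rint(res.x[:R * V]).reshape(R, V)     # round to the closest integer
    return z, z.max(axis=0)                      # y_j = max_i z_ij
```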
The key of applying this algorithm to solve (15) is computing the two subgradients ?? Ln and ?? Rn for a particular ?, which we describe in detail below. First we describe how to compute ?? L. Let (y? , z? , s? ) be the solution to the following optimization problem (called loss-augmented inference in the structural SVM literature): max max max ?(y, yn ) + f? (xn , y) s y z?Z(y) 5 (16) Then it is easy to show that a subgradient ?? Ln can be calculated as ?? Ln = ?(xn , y? , z? , s? ). The loss-augmented inference problem in (16) is similar to the inference problem in (8), except for an additional term ?(y, yn ). We can modify the LP relaxation method in Sec. 3 to solve (16) for a fixed s (and enumerate s to get the final solution). First of all, it is easy to verify that ?(yj , yjn ) can be re-formulated as:  1 ? yj if yjn = 1 n ?(yj , yj ) ? (17) yj if yjn = 0 Using (17), it is easy to show that if we re-define bj as below, the ILP in (12) will solve the lossaugmented inference (16) for a fixed s:  s s ?j,1 ? ?j,0 ? 1 if yjn = 1 bj = (18) s s ?j,1 ? ?j,0 + 1 if yjn = 0 Similarly, we can relax the problem to an LP using the same method in Sec. 3. Now we describe how to compute ?? R. Let (z? , s? ) be the solution to the following optimization problem: maxs maxz?Z(yn ) f? (xn , yn ). Then it can be shown that a subgradient ?? Rn can be calculated as ?? Rn = ?(xn , yn , z? , s? ). For a fixed s, it is easy to show that the maximization over z can be solved by the following ILP: X X (19) zij = yjn , ?i; zij ? {0, 1}, ?i ?j aij zij , s.t. max z i,j j Similarly, we can solve (19) via LP relaxation by replacing the integral constraint zij ? {0, 1} with a linear constraint 0 ? zij ? 1. 5 Experiments We test our model on the UIUC sport dataset [11]. It contains images collected from eight sport classes: badminton, bocce, croquet, polo, rock climbing, rowing, sailing and snowboarding. Each image is annotated with a set of tags denoting the objects in it. We remove annotation terms occurring fewer than three times. We randomly choose half of the data as the test set. From the other half, we randomly select 50 images from each class to form the validation set. The remaining data are used as the training set. We feed the training images and associated annotations (but not the ground-truth sport category labels) to our learning algorithm and set the number of latent scene labels to be eight (i.e. the number of sport classes). We initialize the parameters of our model as follows. First we cluster the training images into eight cluster using the following method. For each training image, we construct a feature vector from the visual information of the image itself and the textual information of its annotation. The visual information is simply the concatenation of visual word counts from all the regions in the image (normalized between 0 and 1), i.e. the dimensionality of the visual feature is PC c=1 Nc . The textual information is the 0-1 vector of the annotation, i.e. the dimensionality is V . We then run k-means clustering based on the combined visual and textual features to cluster training images into eight clusters. We use the cluster membership of each training image as the initial guess of the scene label s (which we call pseudo-scene label). We then initialize the parameters ? by examining the co-occurrence counts of visual words and pseudo-scene labels on the training data. Similarly, we initialize the parameters ? by the co-occurrence counts of annotation terms and pseudo-scene labels. 
5 Experiments

We test our model on the UIUC sport dataset [11]. It contains images collected from eight sport classes: badminton, bocce, croquet, polo, rock climbing, rowing, sailing and snowboarding. Each image is annotated with a set of tags denoting the objects in it. We remove annotation terms occurring fewer than three times. We randomly choose half of the data as the test set. From the other half, we randomly select 50 images from each class to form the validation set; the remaining data are used as the training set. We feed the training images and associated annotations (but not the ground-truth sport category labels) to our learning algorithm and set the number of latent scene labels to eight (i.e. the number of sport classes).

We initialize the parameters of our model as follows. First we cluster the training images into eight clusters using the following method. For each training image, we construct a feature vector from the visual information of the image itself and the textual information of its annotation. The visual information is simply the concatenation of visual word counts from all the regions in the image (normalized between 0 and 1), i.e. the dimensionality of the visual feature is Σ_{c=1}^{4} N_c. The textual information is the 0-1 vector of the annotation, i.e. the dimensionality is V. We then run k-means clustering based on the combined visual and textual features to cluster the training images into eight clusters. We use the cluster membership of each training image as the initial guess of the scene label s (which we call the pseudo-scene label). We then initialize the parameters β by examining the co-occurrence counts of visual words and pseudo-scene labels on the training data. Similarly, we initialize the parameters γ by the co-occurrence counts of annotation terms and pseudo-scene labels. The parameters α are initialized by the co-occurrence counts of visual words and annotation terms, with the mapping constraints ignored.

We compare our model with a baseline method, which is a set of linear SVMs separately trained to predict the 0/1 output of each annotation term based on the feature vector of the visual information. Following [21], we use the F-measure to evaluate annotation performance. The comparison is shown in Table 1(a). Our model outperforms the baseline SVM method. We also list the published result of [21] in the table. However, it is important to remember that it is not directly comparable to the other numbers in Table 1(a), since [21] uses different image features and different subsets of the dataset, unspecified in the paper. We visualize some results on the test data in Fig. 5.

The scene labels s produced by our model for the test images can be considered a clustering of the scenes in those images. We can measure the quality of the scene clustering by comparing with the ground-truth scene labels (i.e. sport categories) of the test images.

Figure 3: Visualization of the γ parameters. Each plot corresponds to a scene label s; we show the weights of the top five components of γ^s_{j,1} over all j ∈ {1..V} (y-axis) and the corresponding annotation terms (x-axis).

Figure 4: Visualization of the "position" components of the α parameters for some annotation terms (athlete, ceiling, floor, grass, rowboat, sailboat, sky, sun, tree, water). Bright areas correspond to high values.
The NMI is defined as NMI(?, D) = [H(?)+H(D)]/2 mutual information and the entropy, respectively. The minimum of NMI is 0 if the cluster is random with respect to the ground-truth. Higher NMIs means better clustering results. The comparison is shown in Table 1(b). Our model outperforms other baseline methods on the scene clustering task. We can visualize some of the parameters to get insights about the learned model. For a particular s scene label s, the parameter ?j,1 measures the compatibility of setting the j-th annotation term s active for the scene label s. We sort the annotation terms according to ?j,1 . In Fig 3, we visualize the top five annotation terms for each of the eight possible values of s. Intuitively, these eight scene clusters obtained from our model seem to match well to the eight different sport categories of this dataset. We also visualize the ?position? (i.e. c = 4) components of the ? parameters (Fig. 4) for several annotation terms as follows. For a particular annotation term j, we find the most preferred ?position? visual word w? for this annotation term by w? = arg maxw ?4w,j . The cluster center of the visual word w? defines an 8 ? 8 position mask of image locations (see [14]), which is visualized in Fig. 4. We can see that the learned ? parameters make intuitive sense, e.g. ?water? is preferred at the bottom of the image, while ?sky? is preferred at the top of the image. 7 method our approach SVM [21] (a) F-measure 0.4552 0.4112 0.3500 method our approach pseudo-label + SVM pseudo-annotation + K-means K-means (b) NMI 0.5295 0.4134 0.3267 0.2227 Table 1: Comparison of image annotation (a) and scene clustering (b). The number of clusters is set to be eight for all methods. See the text for more descriptions. Figure 5: (Best viewed in color) Results of annotation and segmentation on the UIUC sport dataset. Different annotation terms are shown in different colors. Image regions mapped to an annotation term are overlayed with the color corresponding to that annotation term. 6 Conclusion We have presented a discriminatively trained latent model for capturing the relationships among image regions, textual annotations, and overall scenes. Our ultimate goal is to achieve total scene understanding from cheaply available Internet data. Although most previous work in scene understanding focuses on generative probabilistic models (e.g. [1, 3, 11, 12, 21]), this paper offers an alternative path towards this goal via a discriminative framework. We believe discriminative methods offer a complementary advantage over generative ones. Certain relationships (e.g. the mapping between images regions and annotation terms) are hard to model, hence largely ignored in the generative approaches. But those relationships are easy to incorporate in a max-margin discriminative approach like ours. In this work we have provided evidence that modeling these relationships can improve image annotation. Our work provides a general solution that can be broadly applied in other applications involving mapping relationships, e.g. Youtube videos with annotations, movie clips with captions, face detection with person names, etc. There are many open issues to address in future research: (1) extending our model to handle a richer set of annotation terms (nouns, verbs, adjectives, etc) by modifying the many-to-one correspondence assumption. (2) exploring the use of this model with noisier annotation data (e.g. raw Flickr or YouTube tags); (3) exploiting the linguistic structure of tags. 8 References [1] K. 
Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. M. Blei, and M. I. Jordan. Matching words and pictures. Journal of Machine Learning Research, 3:1107?1135, 2003. [2] K. Barnard and Q. Fan. Reducing correspondence ambibuity in loosely labeled training data. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007. [3] T. L. Berg, A. C. Berg, J. Edwards, and D. Forsyth. Who?s in the picture. In Advances in Neural Information Processing Systems, volume 17, pages 137?144. MIT Press, 2004. [4] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for static human-object interactions. In Workshop on Structured Models in Computer Vision, 2010. [5] T.-M.-T. Do and T. Artieres. Large margin training for hidden markov models with partially observed states. In International Conference on Machine Learning, 2009. [6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303?338, 2010. [7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009. [8] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 2004. [9] T. Lan, Y. Wang, W. Yang, and G. Mori. Beyond actions: Discriminative models for contextual group activities. In Advances in Neural Information Processing Systems. MIT Press, 2010. [10] J. Li and J. Z. Wang. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1075?1088, September 2003. [11] L.-J. Li and L. Fei-Fei. What, where and who? classifying events by scene and object recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007. [12] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and segmentation in an automatic framework. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2009. [13] N. Loeff and A. Farhadi. Scene discovery by matrix factorization. In European Conference on Computer Vision, 2008. [14] T. Malisiewicz and A. A. Efros. Recognition by association via learning per-exemplar distances. In IEEE Computer Society Conference on Computer Vision and Pattern Recongition, 2008. [15] C. D. Manning. Introduction to Information Retrieval. Cambridge University Press, 2008. [16] J. C. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In European Conference on Computer Vision, 2010. [17] R. Socher and L. Fei-Fei. Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010. [18] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2004. [19] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453?1484, 2005. [20] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. In Advances in Neural Information Processing Systems. MIT Press, 2009. [21] C. Wang, D. Blei, and L. 
[22] Y. Wang and G. Mori. Max-margin hidden conditional random fields for human action recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2009.
[23] Y. Wang and G. Mori. A discriminative latent model of object classes and attributes. In European Conference on Computer Vision, 2010.
[24] W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010.
[25] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In International Conference on Machine Learning, 2009.
Structured Determinantal Point Processes

Alex Kulesza    Ben Taskar
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
{kulesza,taskar}@cis.upenn.edu

Abstract

We present a novel probabilistic model for distributions over sets of structures: for example, sets of sequences, trees, or graphs. The critical characteristic of our model is a preference for diversity: sets containing dissimilar structures are more likely. Our model is a marriage of structured probabilistic models, like Markov random fields and context-free grammars, with determinantal point processes, which arise in quantum physics as models of particles with repulsive interactions. We extend the determinantal point process model to handle an exponentially-sized set of particles (structures) via a natural factorization of the model into parts. We show how this factorization leads to tractable algorithms for exact inference, including computing marginals, computing conditional probabilities, and sampling. Our algorithms exploit a novel polynomially-sized dual representation of determinantal point processes, and use message passing over a special semiring to compute relevant quantities. We illustrate the advantages of the model on tracking and articulated pose estimation problems.

1 Introduction

The need for distributions over sets of structures arises frequently in computer vision, computational biology, and natural language processing. For example, in multiple target tracking, sets of structures of interest are multiple object trajectories [6]. In gene finding, sets of structures of interest are multiple proteins coded by a single gene via alternative splicing [13]. In machine translation, sets of structures of interest are multiple interpretations or parses of a sentence in a different language [12]. Consider as a running example the problem of detecting and tracking several objects of the same type (e.g., cars, people, faces) in a video, assuming the number of objects is not known a priori. We would like a distribution over sets of trajectories that (1) includes sets of different cardinality and (2) prefers sets of trajectories that are spread out in space-time, as objects are likely to be [11, 15]. Determinantal point processes [10] are attractive models for distributions over sets, because they concisely capture probabilistic mutual exclusion between items via a kernel matrix that determines which items are similar and therefore less likely to appear together. Intuitively, the model balances the diversity of a set against the quality of the items it contains (for example, observation likelihood of an object along the trajectory, or motion smoothness). Remarkably, algorithms for computing certain marginal and conditional probabilities as well as sampling from this model are $O(N^3)$, where $N$ is the total number of possible items, even though there are $2^N$ possible subsets of a set of size $N$ [7, 1]. The problem, however, is that in our setting the total number of possible trajectories $N$ is exponential in the number of time steps. More generally, we consider modeling distributions over sets of structures (e.g., sequences, trees, graphs) where the total number of possible structures is exponential.
Our structured determinantal point process model (SDPP) captures such distributions by combining structured probabilistic models (e.g., a Markov random field to model individual trajectory quality) with determinantal point processes.

Figure 1: (a) A set of points in the plane drawn from a DPP (left), and the same number of points sampled independently (right). (b) The first three steps of sampling a DPP on a set of one-dimensional particle positions, from left to right. Red circles indicate already selected positions. The DPP naturally reduces the probabilities for positions that are similar to those already selected.

We introduce a natural factorization of the determinantal model into parts (as in graphical models and grammars), and show that this factorization together with a novel dual representation of the process enables tractable inference and sampling using message passing algorithms over a special semiring. The contributions of this paper are: (1) introducing SDPPs, (2) a concise dual representation of determinantal processes, (3) tractable message passing algorithms for exact inference and sampling in SDPPs, (4) experimental validation on synthetic motion tracking and real-world pose detection problems. The paper is organized as follows: we present background on determinantal processes in Section 2 and introduce our model in Section 3; we develop inference and sampling algorithms in Section 4, and we describe experiments in Section 5.

2 Background: determinantal point processes

A point process $\mathcal{P}$ on a discrete set $\mathcal{Y} = \{y_1, \ldots, y_N\}$ is a probability measure on $2^{\mathcal{Y}}$, the set of all subsets of $\mathcal{Y}$. $\mathcal{P}$ is called a determinantal point process (DPP) if there exists a positive semidefinite matrix $K$ indexed by the elements of $\mathcal{Y}$ such that if $Y \sim \mathcal{P}$ then for every $A \subseteq \mathcal{Y}$ we have

Determinantal point process:  $P(A \subseteq Y) = \det(K_A)$.  (1)

Here $K_A = [K_{ij}]_{y_i, y_j \in A}$ is the restriction of $K$ to the entries indexed by elements of $A$, and we adopt $\det(K_\emptyset) = 1$. We will refer to $K$ as the marginal kernel, as it contains all the information needed to compute the probability of including any subset $A$ in $Y \sim \mathcal{P}$. A few simple observations follow from Equation (1):

$P(y_i \in Y) = K_{ii}$  (2)
$P(y_i, y_j \in Y) = K_{ii} K_{jj} - K_{ij} K_{ji} = P(y_i \in Y)\,P(y_j \in Y) - K_{ij}^2.$  (3)

That is, the diagonal of $K$ gives the marginal probabilities of inclusion for individual elements of $\mathcal{Y}$, and the off-diagonal elements determine the (anti-)correlations between pairs of elements: large values of $K_{ij}$ imply that $i$ and $j$ tend not to co-occur. Note that DPPs cannot represent distributions where elements are more likely to co-occur than if they were independent: correlations are negative. Figure 1a shows the difference between sampling a set of points in the plane using a DPP (with $K_{ij}$ inversely related to the distance between points $i$ and $j$), which leads to a set that is spread out with good coverage, and sampling points independently, where the points exhibit random clumping. Determinantal point processes, introduced to model fermions [10], also arise in studies of nonintersecting random paths, random spanning trees, and eigenvalues of random matrices [3, 2, 7]. The most relevant construction of DPPs for our purpose is via L-ensembles [1]. An L-ensemble defines a DPP via a positive semidefinite matrix $L$ indexed by the elements of $\mathcal{Y}$:

L-ensemble DPP:  $P_L(Y) = \frac{\det(L_Y)}{\det(L + I)}$,  (4)

where $I$ is the $N \times N$ identity matrix.
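A small numerical sketch of Equations (1)-(3): we build a hypothetical 3-item marginal kernel and confirm that similar items repel while dissimilar items behave independently.

```python
import numpy as np

# A hypothetical 3-item marginal kernel K (positive semidefinite).
# Items 0 and 1 are similar (large off-diagonal entry); item 2 is not.
K = np.array([[0.5, 0.4, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.0, 0.5]])

def inclusion_prob(K, A):
    """P(A subset of Y) = det(K_A), Equation (1)."""
    A = list(A)
    return np.linalg.det(K[np.ix_(A, A)])

print(inclusion_prob(K, [0]))     # K_00 = 0.5
print(inclusion_prob(K, [0, 1]))  # 0.25 - 0.16 = 0.09 < 0.25: repulsion
print(inclusion_prob(K, [0, 2]))  # 0.25: these items co-occur independently
```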
Note that $P_L$ is normalized due to the identity $\sum_{Y \subseteq \mathcal{Y}} \det(L_Y) = \det(L + I)$. L-ensembles directly define the probability of observing each subset of $\mathcal{Y}$, and subsets that have higher diversity (as measured by the corresponding determinant) have higher likelihood. To get probabilities of item co-occurrence as in Equation (1), we can compute the marginal kernel $K$ for the L-ensemble $P_L$:

L-ensemble marginal kernel:  $K = (L + I)^{-1} L$.  (5)

Note that $K$ can be computed from the eigen-decomposition of $L = \sum_{k=1}^{N} \lambda_k v_k v_k^\top$ by a simple re-scaling of eigenvalues: $K = \sum_{k=1}^{N} \frac{\lambda_k}{\lambda_k + 1} v_k v_k^\top$.

To get a better understanding of how $L$ affects marginals $K$, note that $L$ can be written as a Gram matrix with $L(y_i, y_j) = q(y_i)\,\phi(y_i)^\top \phi(y_j)\,q(y_j)$ for $q(y_i) \ge 0$ and some "feature mapping" $\phi(y): \mathcal{Y} \mapsto \mathbb{R}^D$, where $D \le N$ and $\|\phi(y_i)\|_2 = 1$. We can think of $q(y_i)$ as the "quality score" for item $y_i$ and $\phi(y_i)^\top \phi(y_j)$ as the normalized "similarity" between items $y_i$ and $y_j$:

L-ensemble (L = quality * similarity):  $P_L(Y) \propto \det(\phi(Y)^\top \phi(Y)) \prod_{y_i \in Y} q^2(y_i)$,  (6)

where $\phi(Y)$ is a $D \times |Y|$ matrix with columns $\phi(y_i)$, $y_i \in Y$. We will use this quality*similarity based representation extensively below. Roughly speaking, $P_L(y_i \in Y)$ increases monotonically with quality $q(y_i)$, and $P_L(y_i, y_j \in Y)$ decreases monotonically with similarity $\phi(y_i)^\top \phi(y_j)$.

We briefly mention a few other efficiently computable quantities of DPPs [1]:

L-ensemble conditionals:  $P_L(Y = A \cup B \mid A \subseteq Y) = \frac{\det(L_{A \cup B})}{\det(L + I_{\mathcal{Y} \setminus A})}$,  (7)

where $I_{\mathcal{Y} \setminus A}$ is the matrix with ones in the diagonal entries indexed by elements of $\mathcal{Y} \setminus A$ and zeros everywhere else. Conditional marginal probabilities $P_L(B \subseteq Y \mid A \subseteq Y)$ as well as inclusion/exclusion probabilities $P_L(A \subseteq Y \wedge B \cap Y = \emptyset)$ can also be computed efficiently using eigen-decompositions of $L$ and related matrices.

Sampling. Sampling from $P_L$ is also efficient [7]. Let $L = \sum_{k=1}^{N} \lambda_k v_k v_k^\top$ be an orthonormal eigen-decomposition, and let $e_i$ be the $i$th standard basis $N$-vector (all zeros except for a 1 in the $i$th position). Then the following algorithm samples $Y \sim P_L$:

Initialize: $Y = \emptyset$, $V = \emptyset$;
Add each eigenvector $v_k$ to $V$ independently with prob. $\frac{\lambda_k}{\lambda_k + 1}$;
while $|V| > 0$ do
    Select a $y_i$ from $\mathcal{Y}$ with $\Pr(y_i) = \frac{1}{|V|} \sum_{v \in V} (v^\top e_i)^2$;
    Update $Y = Y \cup \{y_i\}$;
    Compute $V_\perp$, an orthonormal basis for the subspace of $V$ orthogonal to $e_i$, and let $V = V_\perp$;
end
Return $Y$;

Algorithm 1: Sampling algorithm for L-ensemble DPPs.

This yields a natural and efficient procedure for sampling from $\mathcal{P}$ given an eigen-decomposition of $L$. It also offers some additional insights. Because the dimension of $V$ is reduced by one on each iteration of the loop, and because the initial dimension of $V$ is simply the number of selected eigenvectors in step one, the size of $Y$ is distributed as the number of successes in $N$ Bernoulli trials where trial $k$ succeeds with probability $\frac{\lambda_k}{\lambda_k + 1}$. In particular, $|Y|$ cannot be larger than $\mathrm{rank}(L)$, and $\mathbb{E}[|Y|] = \sum_{k=1}^{N} \frac{\lambda_k}{\lambda_k + 1}$.

To get a feel for the sampling algorithm, it is useful to visualize the distributions used to select $y_i$ at each time step, and to see how they are influenced by previously chosen items. Figure 1b shows this progression for a simple DPP where $\mathcal{Y}$ is the set of points in $[0, 1]$, quality scores are uniformly 1, and the feature mapping is such that $\phi(y_i)^\top \phi(y_j) \propto \exp(-(y_i - y_j)^2)$; that is, points are more similar the closer together they are.
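Algorithm 1 is direct to implement given an eigendecomposition of L. The following NumPy rendering is a sketch that favors clarity over efficiency (the orthogonal complement is recomputed with an SVD each iteration rather than updated incrementally), and the toy kernel at the end is made up.

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Sample Y ~ P_L for an L-ensemble DPP (Algorithm 1)."""
    rng = rng or np.random.default_rng()
    lam, V = np.linalg.eigh(L)               # L = V diag(lam) V^T
    # Step 1: keep eigenvector k with probability lam_k / (lam_k + 1).
    V = V[:, rng.random(len(lam)) < lam / (lam + 1)]
    Y = []
    while V.shape[1] > 0:
        # Pr(y_i) = (1/|V|) sum_v (v^T e_i)^2: normalized row norms of V.
        p = (V ** 2).sum(axis=1) / V.shape[1]
        i = rng.choice(len(p), p=p)
        Y.append(i)
        # Project the span of V onto the subspace orthogonal to e_i,
        # then recover an orthonormal basis (dimension drops by one).
        v = V[i, :]                           # e_i components of the basis
        V = V - np.outer(V @ v, v) / (v @ v)
        U, s, _ = np.linalg.svd(V, full_matrices=False)
        V = U[:, s > 1e-10]
    return sorted(Y)

# Toy example: three items, items 0 and 1 with nearly identical features.
phi = np.array([[1.0, 0.0], [0.99, np.sqrt(1 - 0.99 ** 2)], [0.0, 1.0]])
q = np.ones(3)
B = (q[:, None] * phi)
L = B @ B.T
print(sample_dpp(L))   # rarely contains both 0 and 1 together
```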
Initially, the eigenvectors $V$ give rise to a fairly uniform distribution over points in $\mathcal{Y}$, but as each successive point is selected and $V$ is updated, the distribution shifts to avoid points near those already chosen.

$\mathcal{Y}$, $Y$, $y_i$, $N$: $\mathcal{Y}$ is the base set, $Y$ is a subset of $\mathcal{Y}$, $y_i$ is an element of $\mathcal{Y}$, $N$ is the size $|\mathcal{Y}|$
$L$, $L_Y$: $L$ is a p.s.d. matrix defining $P(Y) \propto \det(L_Y)$; $L_Y$ is a submatrix indexed by $Y$
$K$, $K_A$: $K$ is a p.s.d. matrix defining marginals via $P(A \subseteq Y) = \det(K_A)$
$q(y_i)$, $\phi(y_i)$: quality*similarity decomposition; $L_{ij} = q(y_i)\phi(y_i)^\top\phi(y_j)q(y_j)$, $\phi(y_j) \in \mathbb{R}^D$
$B$, $C$: $C = BB^\top$ is the dual of $L = B^\top B$; the columns of $B$ are $B_i = q(y_i)\phi(y_i)$
$\alpha$, $y_{i\alpha}$, $y_\alpha$: $\alpha$ is a factor of a structure; $y_{i\alpha}$, $y_\alpha$ index the relevant part of the structure

Table 1: Summary of notation.

3 Structured determinantal point processes

DPPs are amazingly tractable distributions when $N$, the size of the base set $\mathcal{Y}$, is small. However, we are interested in defining DPPs over exponentially sized $\mathcal{Y}$. For example, consider the case where each $y_i$ is itself a sequence of length $T$: $y_i = (y_{i1}, \ldots, y_{iT})$, where $y_{it}$ is the state at time $t$ (e.g., the location of an object in the $t$-th frame of a video). Assuming there are $n$ states at each time $t$ and all state transitions are possible, there are $n^T$ possible sequences, so $N = n^T$. In order to define a DPP over structures such as sequences or trees, we assume a factorization of the quality score $q(y_i)$ and similarity score $\phi(y_i)^\top \phi(y_j)$ into parts, similar to a graphical model decomposition. For a sequence, the scores can be naturally decomposed into factors that depend on the state $y_{it}$ at each time $t$ and the states $(y_{it}, y_{i,t+1})$ for each transition $(t, t+1)$. More generally, we assume a set of factors and use the notation $y_{i\alpha}$ to refer to the $\alpha$ part of the structure $y_i$ (similarly, we use $y_\alpha$ to refer to the $\alpha$ part of the structure $y$). We assume that quality decomposes multiplicatively and similarity decomposes additively, as follows. (As before, $L(y_i, y_j) = q(y_i)\phi(y_i)^\top\phi(y_j)q(y_j)$.)

Structured DPP factorization:  $q(y_i) = \prod_\alpha q(y_{i\alpha})$  and  $\phi(y_i) = \sum_\alpha \phi(y_{i\alpha})$.  (8)

We argue that these are quite natural factorizations. Quality scores, for example, can be given by a typical log-linear Markov random field, which defines a multiplicative distribution over structures. Similarity scores can be thought of as dot products between features of the two labelings. In our tracking example, the feature mapping $\phi(y_{it})$ should reflect similarity between trajectories; e.g., features could track coarse-level position at time $t$, so that the model considers sets with trajectories that pass near or through the same states less likely. A common problem in multiple target tracking is that the quality of one object's trajectory and its neighborhood "tube" is often much more likely than other objects' trajectories as measured by an HMM or CRF model, so standard sampling from a graphical model will produce very similar, overlapping trajectories, ignoring less "detectable" targets. A sample from the structured DPP model would be much more likely to contain diverse trajectories. (See Figure 2.)

Dual representation. While the factorization in Equation (8) concisely defines a DPP over a structured $\mathcal{Y}$, the more remarkable fact is that it gives rise to tractable algorithms for computing key marginals and conditionals when the set of factors is low-treewidth, just as in graphical model inference [8], even though $L$ is too large to even write down.
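To make Equation (8) and the dual pairing from Table 1 concrete, this sketch enumerates a tiny sequence space, small enough that L can be materialized, and checks that the $D \times D$ matrix $C = BB^\top$ carries the same nonzero spectrum as the $n^T \times n^T$ matrix $L = B^\top B$; all factor values are synthetic.

```python
import numpy as np
from itertools import product

n, T, D = 3, 4, 5                       # 3 states, length-4 sequences, 5 features
rng = np.random.default_rng(0)
q_node = rng.uniform(0.5, 1.5, size=(T, n))    # per-position quality factors
phi_node = rng.normal(size=(T, n, D))          # per-position feature vectors

# Equation (8): quality multiplies over parts, similarity features add.
structures = list(product(range(n), repeat=T))  # N = n**T structures
B = np.zeros((D, len(structures)))
for i, y in enumerate(structures):
    q = np.prod([q_node[t, s] for t, s in enumerate(y)])
    phi = sum(phi_node[t, s] for t, s in enumerate(y))
    phi = phi / np.linalg.norm(phi)             # normalized similarity features
    B[:, i] = q * phi                           # column B_i = q(y_i) phi(y_i)

L = B.T @ B                                     # N x N: exponential in T
C = B @ B.T                                     # D x D: the dual representation
lam_L = np.sort(np.linalg.eigvalsh(L))[-D:]     # nonzero eigenvalues of L
lam_C = np.sort(np.linalg.eigvalsh(C))
print(np.allclose(lam_L, lam_C))                # True: same nonzero spectrum
print(L.shape, C.shape)                         # (81, 81) vs (5, 5)
```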
We propose the following dual representation of $L$ in order to exploit the factorization. Let us define a $D \times N$ matrix $B$ whose columns are given by $B_i = q(y_i)\phi(y_i)$, so that $L = B^\top B$. Consider the $D \times D$ matrix $C = BB^\top$; note that typically $D \ll N$ (in fact, the rank of $B$ is at most $O(nT)$ in the sequence case). The eigenvalues of $C$ and $L$ are identical, and the eigenvectors are related as follows: if $C = \sum_k \lambda_k v_k v_k^\top$, then $L = \sum_k (B^\top v_k)(B^\top v_k)^\top$. That is, if $v_k$ is the $k$-th eigenvector of $C$, then $B^\top v_k$ is an (unnormalized) eigenvector of $L$ with the same eigenvalue $\lambda_k$. This connection allows us to compute important quantities from $C$. For example, to compute the L-ensemble normalization $\det(L + I) = \prod_k (\lambda_k + 1)$ in Equation (4), we just need the eigenvalues of $C$. To compute $C$ itself, we need to compute $BB^\top = \sum_{y_i} q^2(y_i)\phi(y_i)\phi(y_i)^\top$. This appears daunting, but the factorization turns out to offer an efficient dynamic programming solution. We discuss in more detail how to compute $C$ for sequences (and for fixed-treewidth factors in general) in the next section.

Figure 2: Sampled particle trajectories (position vs. time). Sets of (structured) particle trajectories sampled from the SDPP (top row) and independently using only quality scores (bottom row). The curves to the left indicate the quality scores for the possible initial positions.

Assuming we can compute $C$ efficiently, we can eigen-decompose it as $C = \sum_k \lambda_k v_k v_k^\top$ in $O(D^3)$. Then, to compute $P_L(y_i \in Y)$, the probability of any single trajectory being included in $Y \sim P_L$, we have all we need:

Structured marginal:  $K_{ii} = \sum_k \frac{\lambda_k}{\lambda_k + 1}(B_i^\top v_k)^2 = q^2(y_i)\sum_k \frac{\lambda_k}{\lambda_k + 1}(\phi(y_i)^\top v_k)^2.$  (9)

Similarly, given two trajectories $y_i$ and $y_j$, $P_L(y_i, y_j \in Y) = K_{ii}K_{jj} - K_{ij}^2$, where

$K_{ij} = \sum_k \frac{\lambda_k}{\lambda_k + 1}(B_i^\top v_k)(B_j^\top v_k) = q(y_i)\,q(y_j)\sum_k \frac{\lambda_k}{\lambda_k + 1}(\phi(y_i)^\top v_k)(\phi(y_j)^\top v_k).$  (10)

4 Inference for SDPPs

We now turn to computing $C$ using the factorization in Equation (8). We have

$C = \sum_{y \in \mathcal{Y}} q^2(y)\,\phi(y)\,\phi(y)^\top = \sum_{y \in \mathcal{Y}} \Big(\prod_\alpha q^2(y_\alpha)\Big)\Big(\sum_\alpha \phi(y_\alpha)\Big)\Big(\sum_\alpha \phi(y_\alpha)\Big)^\top.$  (11)

If we think of $q^2(y_\alpha)$ as factor potentials of a graphical model $p(y) \propto \prod_\alpha q^2(y_\alpha)$, then computing $C$ is equivalent to computing second moments of additive features (modulo the normalization $Z$). A naive algorithm can simply compute all $O(T^2)$ pairwise marginals $p(y_\alpha, y_{\alpha'})$ and, by linearity of expectation, add up the contributions: $C = Z\sum_{\alpha,\alpha'}\sum_{y_\alpha, y_{\alpha'}} p(y_\alpha, y_{\alpha'})\,\phi(y_\alpha)\,\phi(y_{\alpha'})^\top$. However, we can use a much more efficient $O(D^2 T)$ algorithm based on second-order semiring message passing [9]. The details are given in Appendix A of the supplementary material, but in short we apply the standard two-pass belief propagation algorithm for trees with a particular semiring in place of the usual sum-product or max-sum. By performing message passing under this second-order semiring, one can efficiently compute any quantity of the form

$\sum_{y \in \mathcal{Y}} \Big(\prod_\alpha p(y_\alpha)\Big)\Big(\sum_\alpha a(y_\alpha)\Big)\Big(\sum_\alpha b(y_\alpha)\Big)$  (12)

for functions $p \ge 0$, $a$, and $b$ in time $O(T)$. Since the outer product in Equation (11) comprises $D^2$ quantities of the type in Equation (12), we can compute $C$ in time $O(D^2 T)$.
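A minimal sketch of the second-order computation in Equation (12) for a chain with potentials on nodes and edges but features on nodes only (the full construction also handles features on edges and tree-structured factors). The forward message carries the four accumulators (sum of p, sum of p*a, sum of p*b, sum of p*a*b), and a brute-force enumeration checks the result; all values are synthetic.

```python
import numpy as np
import itertools

def second_order_chain(p_node, p_trans, a_node, b_node):
    """sum over state sequences y of
       [prod_t p(y_t) * prod_t p(y_{t-1}, y_t)] * [sum_t a(y_t)] * [sum_t b(y_t)]
    in O(T n^2) time via forward recursion in the second-order semiring."""
    T, n = p_node.shape
    m0 = p_node[0].copy()                 # sum of p over prefixes, per state
    ma = m0 * a_node[0]                   # sum of p * (running a-total)
    mb = m0 * b_node[0]                   # sum of p * (running b-total)
    mab = m0 * a_node[0] * b_node[0]      # sum of p * a-total * b-total
    for t in range(1, T):
        m0_in, ma_in = p_trans.T @ m0, p_trans.T @ ma
        mb_in, mab_in = p_trans.T @ mb, p_trans.T @ mab
        m0 = p_node[t] * m0_in
        ma = p_node[t] * (ma_in + a_node[t] * m0_in)
        mb = p_node[t] * (mb_in + b_node[t] * m0_in)
        mab = p_node[t] * (mab_in + a_node[t] * mb_in
                           + b_node[t] * ma_in
                           + a_node[t] * b_node[t] * m0_in)
    return mab.sum()

# Brute-force check on a tiny chain (T=3, n=2).
rng = np.random.default_rng(1)
T, n = 3, 2
p_node, p_trans = rng.uniform(size=(T, n)), rng.uniform(size=(n, n))
a_node, b_node = rng.normal(size=(T, n)), rng.normal(size=(T, n))
brute = sum(np.prod([p_node[t, y[t]] for t in range(T)])
            * np.prod([p_trans[y[t - 1], y[t]] for t in range(1, T)])
            * sum(a_node[t, y[t]] for t in range(T))
            * sum(b_node[t, y[t]] for t in range(T))
            for y in itertools.product(range(n), repeat=T))
assert np.isclose(second_order_chain(p_node, p_trans, a_node, b_node), brute)
```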
Sampling. As described in Section 3, the eigen-decomposition of $C$ yields an implicit representation of $L$: for each eigenvalue/vector pair $(\lambda_k, v_k)$ of $C$, $(\lambda_k, B^\top v_k)$ is a corresponding pair for $L$. We show that this implicit representation is enough to efficiently perform the sampling procedure in Algorithm 1. The key is to represent $V$, the orthonormal set of vectors in $\mathbb{R}^N$, as a set $\hat{V}$ of vectors in $\mathbb{R}^D$, with the mapping $V = \{B^\top v \mid v \in \hat{V}\}$. Let $v_i, v_j$ be two arbitrary vectors in $\hat{V}$. Then we have $(B^\top v_i)^\top (B^\top v_j) = v_i^\top BB^\top v_j = v_i^\top C v_j$. Thus we can compute dot products between vectors in $V$ using their preimage in $\hat{V}$. This is sufficient to compute the normalization for each eigenvector $B^\top v$, as required to obtain an initial orthonormal basis. Trivially, we can also compute (implicit) sums between vectors in $V$; this combined with dot products is enough to perform the Gram-Schmidt orthonormalization needed to obtain $\hat{V}_\perp$ from $\hat{V}$ and the most recently selected $y_i$ at each iteration.

All that remains, then, is to choose a structure $y_i$ according to the distribution $\Pr(y_i) = \frac{1}{|\hat{V}|}\sum_{v \in \hat{V}}((B^\top v)^\top e_i)^2$. Recall that the columns of $B$ are given by $B_i = q(y_i)\phi(y_i)$. Thus the distribution can be rewritten as

$\Pr(y_i) = \frac{1}{|\hat{V}|}\sum_{v \in \hat{V}} q^2(y_i)\,(v^\top \phi(y_i))^2.$  (13)

By assumption $q^2(y_i)$ decomposes multiplicatively over parts of $y_i$, and $v^\top \phi(y_i)$ decomposes additively. Thus the distribution is a sum of $|\hat{V}|$ terms, each having the form of Equation (12). We can therefore apply message passing in the second-order semiring to compute marginals of this distribution; that is, for each part $y_\alpha$ we can compute

$\frac{1}{|\hat{V}|}\sum_{v \in \hat{V}}\ \sum_{y \sim y_\alpha} q^2(y)\,(v^\top \phi(y))^2,$  (14)

where the inner sum is over all structures consistent with the value of $y_\alpha$. This only takes $O(T|\hat{V}|)$ time. In fact, the message-passing computation of these marginals yields an efficient algorithm for sampling individual full structures $y_i$ as required by Algorithm 1; the key is to pass normal messages forward, but conditional messages backward. Suppose we have a sequence model; since the forward pass completes with correct marginals at the final node, we can correctly sample its value before any backwards messages are sent. Once the value of the final node is fixed, we pass a conditional message backwards; that is, we send zeros for all values other than the one just selected. This results in conditional marginals at the penultimate node. We can then conditionally sample its value, and repeat this process until all nodes have been assigned. Furthermore, by applying the second-order semiring we are able to sample from a distribution quite different from that of a traditional graphical model. The algorithm is described in more detail in Appendix B of the supplementary material.

5 Experiments

We begin with a synthetic motion tracking task, where the goal is to follow a collection of particles as they travel in a one-dimensional space over time. This is the structured analog of the setting shown in Figure 1b, where elements of $\mathcal{Y}$ are no longer single positions in $[0, 1]$, but are now sequences of such positions over many time periods. For our experiments, we modeled paths $y_i$ over $T = 50$ time steps, where at each time $t$ a particle can be in one of 50 discretized positions, $y_{it} \in \{1, \ldots, 50\}$. The total number of possible trajectories is thus $50^{50}$, and there are $2^{50^{50}}$ possible sets of trajectories. While a real tracking problem would involve quality scores $q(y)$ that depend on some observations, e.g., measurements over time from a set of physical sensors, for simplicity we determine the quality of a trajectory using only its starting position and a measure of smoothness over time: $q(y) = q(y_1)\prod_{t=2}^{T} q(y_{t-1}, y_t)$.
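This synthetic quality model can be written down in a few lines. The exact heights and widths of the initial-score modes below are guesses at the qualitative description that follows, not the authors' values.

```python
import numpy as np

n_states, T = 50, 50
x = np.arange(1, n_states + 1)
# Standard normal density, used for both transitions and (scaled) modes.
f = lambda z, s=1.0: np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Initial quality: high in the middle with secondary modes on each side
# (hypothetical mixture; only the shape is described in the text).
q_init = f(x - 25, 4.0) + 0.5 * f(x - 10, 3.0) + 0.5 * f(x - 40, 3.0)

# Transition quality q(y_{t-1}, y_t) = f(y_{t-1} - y_t), unit-variance Gaussian.
q_trans = f(x[:, None] - x[None, :])

def trajectory_quality(y):
    """q(y) = q(y_1) * prod_t q(y_{t-1}, y_t) for a 1-based state sequence."""
    q = q_init[y[0] - 1]
    for t in range(1, len(y)):
        q *= q_trans[y[t - 1] - 1, y[t] - 1]
    return q

# A smooth path through the central mode beats a steadily drifting one.
print(trajectory_quality([25] * T) > trajectory_quality(list(range(1, T + 1))))
```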
The initial quality scores $q(y_1)$ depicted on the left of Figure 2 are high in the middle with secondary modes on each side. The transition quality is given by $q(y_{t-1}, y_t) = f(y_{t-1} - y_t)$, where $f$ is the density function of the zero-mean Gaussian with unit variance. We scale the quality scores so that the expected number of selected trajectories is 5. We want trajectories to be considered similar if they travel through similar positions, so we define a 50-dimensional feature vector $\phi(y) = \sum_{t=1}^{T} \phi(y_t)$, where $\phi_r(y_t) \propto f(r - y_t)$ for $r = 1, \ldots, 50$. Intuitively, feature $r$ is activated when the trajectory passes near position $r$, so trajectories passing through nearby positions will activate the same features and thus appear similar. Figure 2 shows the results of applying our SDPP sampling algorithm to this setting. Sets of trajectories drawn independently according to quality score tend to cluster in the middle region (second row). The SDPP samples, however, are more diverse, tending to cover more of the space while still respecting the quality scores: they are still smooth, and still tend to start near the middle position.

Pose estimation. To demonstrate that SDPPs effectively model characteristics of real-world data, we apply them to a multiple-person pose estimation task. Our dataset consists of 73 still frames taken from various TV shows, each approximately 720 by 540 pixels in size¹. As much as possible, the selected frames contain three or more people at similar scale, all facing the camera and without serious occlusions. Sample images from the dataset are shown in Figure 4. The task is to identify the location and pose of each person in the image. For our purposes, each pose is a structure containing four parts (head, torso, right arm, and left arm), each of which takes a value consisting of a pixel location and an orientation (one of 24 discretized angles). There are approximately 75,000 possible such values for each part, so there are about $75{,}000^4$ possible poses. Each image was labeled by hand for evaluation. We use a standard pictorial structure model [4, 5], treating each pose as a two-level tree with the torso as the root and the head and arms as leaves. Our quality scores are derived from [14]; they factorize across the nodes (body parts) $P$ and edges (joints) $J$ as $q(y) = \gamma\big(\prod_{p \in P} q(y_p) \prod_{pp' \in J} q(y_p, y_{p'})\big)^\beta$. Here $\gamma$ is a scale parameter that controls the expected number of poses in each sample, and $\beta$ is a sharpness parameter that we found helpful in controlling the impact of the quality scores. (We set parameter values using a held-out training set; see below.) Each part receives a quality score $q(y_p)$ given by a customized part detector previously trained on similar images. The joint quality score $q(y_p, y_{p'})$ is given by a Gaussian "spring" that encourages, for example, the left arm to begin near the left shoulder. Full details of the quality terms are provided in [14]. Given our data, we want to discourage the model from selecting overlapping poses, so we design our similarity features spatially. We define an evenly spaced 8 by 4 grid of reference points $x_1, \ldots, x_{32}$, and use $\phi(y) = \sum_{p \in P} \phi(y_p)$, where $\phi_r(y_p) \propto f(\|y_p - x_r\|_2 / \sigma)$. Recall that $f$ is the standard normal density function, and $\|y_p - x_r\|_2$ is the distance between the position of part $p$ (ignoring angle) and the reference point $x_r$. The parameter $\sigma$ controls the width of the kernel. Poses that occupy the same part of the image will be near the same reference points, and thus appear similar.
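A plausible rendering of these spatial similarity features follows; the grid placement convention and the kernel width sigma are illustrative choices, since the paper tunes sigma on held-out data, and the example poses are made up.

```python
import numpy as np

def spatial_features(part_positions, img_w=720, img_h=540, sigma=80.0):
    """phi_r(y_p) ~ f(||y_p - x_r|| / sigma), summed over body parts,
    for an evenly spaced 8 x 4 grid of reference points x_r.
    sigma and the interior-grid convention are illustrative guesses."""
    gx = np.linspace(0, img_w, 8 + 2)[1:-1]   # 8 columns, away from borders
    gy = np.linspace(0, img_h, 4 + 2)[1:-1]   # 4 rows
    refs = np.array([(x, y) for y in gy for x in gx])   # 32 reference points
    f = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    phi = np.zeros(len(refs))
    for p in part_positions:                  # one (x, y) per body part
        phi += f(np.linalg.norm(refs - np.asarray(p), axis=1) / sigma)
    return phi / np.linalg.norm(phi)          # unit norm, as in the L-ensemble

# Two overlapping poses give a large feature dot product (high similarity);
# a distant pose gives a much smaller one.
pose_a = [(100, 100), (100, 160), (80, 160), (120, 160)]
pose_b = [(110, 105), (110, 165), (90, 165), (130, 165)]
pose_c = [(500, 300), (500, 360), (480, 360), (520, 360)]
print(spatial_features(pose_a) @ spatial_features(pose_b))  # close to 1
print(spatial_features(pose_a) @ spatial_features(pose_c))  # much smaller
```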
We compare our model against two baselines. The first is an independent model which draws poses independently according to the distribution obtained by normalizing the quality scores. The second is a simple non-maxima suppression model that iteratively selects successive poses using the normalized quality scores, but under the hard constraint that they do not overlap with any previously selected pose. (Poses overlap if they cover any of the same pixels when rendered.) In both cases, the number of poses is given by a draw from the SDPP model, ensuring no systematic bias.

We split our data randomly into a training set of 13 images and a test set of 60 images. Using the training set, we select values for $\gamma$, $\beta$, and $\sigma$ that optimize overall F1 score at radius 100 (see below), as well as distinct optimal values of $\gamma$ for the baselines. ($\beta$ and $\sigma$ are irrelevant for the baselines.) We then use each model to sample 10 sets of poses for each test image, or 600 samples per model. For each sample, we compute precision, recall, and F1 score. For our purposes, precision is the fraction of predicted parts where both endpoints are within a particular radius of the endpoints of an expert-labeled part of the same type (head, left arm, etc.). Correspondingly, recall is the fraction of expert-labeled parts within a given radius of a predicted part of the same type. Since our SDPP model encourages diversity, we expect to see improvements in recall at the expense of precision. F1 score is the harmonic mean of precision and recall. We compute all metrics separately for each sample, and then average the results across samples and images in the test set.

The results over several different radii are shown in Figure 3a. At tight tolerances the SDPP performs comparably to the independent samples (perhaps because the quality scores are only accurate at the mode, so diverse samples are not close enough to be valuable). As the radius increases, however, the SDPP obtains significantly better results, outperforming both baselines. Figure 3b shows the curves for the arms alone; the arms tend to be more difficult to locate accurately. Figure 3c shows the precision/recall obtained by each model. As expected, the SDPP model achieves its improved F1 score by increasing recall at the cost of precision.

¹The images and code from [14] are available at http://www.vision.grasp.upenn.edu/video

Figure 3: Results for pose estimation: (a) overall F1, (b) F1 for the arms only, and (c) precision/recall, each plotted against the match radius in pixels for the SDPP, non-max, and independent models. The horizontal axis gives the distance threshold used to determine whether two parts are successfully matched. 95% confidence intervals are shown.

Figure 4: Structured marginals for the pose estimation task on successive steps of the sampling algorithm, with already selected poses superimposed. Input images are shown on the left.

For illustration, we show the sampling process for a few images in Figure 4. As in Figure 1b, the SDPP efficiently discounts poses that are similar to those already selected.
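The precision/recall protocol just described can be sketched as follows; the endpoint-matching convention is a plausible reading of the text rather than the authors' exact evaluation code, and the example poses are made up.

```python
import numpy as np

def pr_at_radius(pred, gold, radius):
    """Precision/recall/F1 for part predictions at a match radius.
    pred, gold: dicts mapping part type -> list of (endpoint1, endpoint2)
    pairs, each endpoint an (x, y) tuple. A predicted part matches a gold
    part of the same type if both endpoints lie within `radius` of the
    corresponding gold endpoints."""
    def close(a, b):
        return (np.linalg.norm(np.subtract(a[0], b[0])) <= radius and
                np.linalg.norm(np.subtract(a[1], b[1])) <= radius)
    n_pred = sum(len(v) for v in pred.values())
    n_gold = sum(len(v) for v in gold.values())
    tp_pred = sum(any(close(p, g) for g in gold.get(t, []))
                  for t, ps in pred.items() for p in ps)
    tp_gold = sum(any(close(g, p) for p in pred.get(t, []))
                  for t, gs in gold.items() for g in gs)
    prec = tp_pred / n_pred if n_pred else 0.0
    rec = tp_gold / n_gold if n_gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

pred = {"head": [((10, 10), (10, 40))], "torso": [((10, 40), (10, 120))]}
gold = {"head": [((15, 12), (12, 45))]}
print(pr_at_radius(pred, gold, radius=20))  # (0.5, 1.0, 0.667)
```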
6 Conclusion

We introduced the structured determinantal point process (SDPP), a probabilistic model over sets of structures such as sequences, trees, or graphs. We showed the intuitive "diversification" properties of the SDPP, and developed efficient message-passing algorithms to perform inference through a dual characterization of the standard DPP and a natural factorization.

Acknowledgments

The authors were partially supported by NSF Grant 0803256.

References

[1] A. Borodin. Determinantal point processes, 2009.
[2] A. Borodin and A. Soshnikov. Janossy densities. I. Determinantal ensembles. Journal of Statistical Physics, 113(3):595–610, 2003.
[3] D. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer, 2003.
[4] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. International Journal of Computer Vision, 61(1):55–79, 2005.
[5] M. Fischler and R. Elschlager. The representation and matching of pictorial structures. IEEE Transactions on Computers, 100(22), 1973.
[6] D. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, 2003.
[7] J. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 3:206–229, 2006.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[9] Z. Li and J. Eisner. First- and second-order expectation semirings with applications to minimum-risk training on translation forests. In Proc. EMNLP, 2009.
[10] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied Probability, 7(1):83–122, 1975.
[11] J. MacCormick and A. Blake. A probabilistic exclusion principle for tracking multiple objects. International Journal of Computer Vision, 39(1):57–71, 2000.
[12] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, Boston, MA, 1999.
[13] T. Nilsen and B. Graveley. Expansion of the eukaryotic proteome by alternative splicing. Nature, 463(7280):457–463, 2010.
[14] B. Sapp, C. Jordan, and B. Taskar. Adaptive pose priors for pictorial structures. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'10), 2010.
[15] T. Zhao and R. Nevatia. Tracking multiple humans in complex situations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26:1208–1221, 2004.
Integrated Segmentation and Recognition of Hand-Printed Numerals

James D. Keeler*
MCC, 3500 W. Balcones Ctr. Dr., Austin, TX 78759

David E. Rumelhart
Psychology Department, Stanford University, Stanford, CA 94305

Wee-Kheng Leow
MCC and University of Texas, Austin, TX 78759

Abstract

Neural network algorithms have proven useful for recognition of individual, segmented characters. However, their recognition accuracy has been limited by the accuracy of the underlying segmentation algorithm. Conventional, rule-based segmentation algorithms encounter difficulty if the characters are touching, broken, or noisy. The problem in these situations is that often one cannot properly segment a character until it is recognized, yet one cannot properly recognize a character until it is segmented. We present here a neural network algorithm that simultaneously segments and recognizes in an integrated system. This algorithm has several novel features: it uses a supervised learning algorithm (backpropagation), but is able to take position-independent information as targets and self-organize the activities of the units in a competitive fashion to infer the positional information. We demonstrate this ability with overlapping hand-printed numerals.

1 INTRODUCTION

A major problem with standard backpropagation algorithms for pattern recognition is that they seem to require carefully segmented and localized input patterns for training. This is a problem for two reasons: first, it is often a labor-intensive task to provide this information, and second, the decision as to how to segment often depends on prior recognition. However, we describe below a neural network design and corresponding backpropagation learning algorithm that learns to simultaneously segment and identify a pattern.¹

*Reprint requests: Jim Keeler, [email protected] or [email protected]

There are two important aspects to many pattern recognition problems that we have built directly into our network and learning algorithm. The first is that the exact location of the pattern, in space or time, is irrelevant to the classification of the pattern; it should be recognized as a member of the same class wherever or whenever it occurs. This suggests that we build translation independence directly into our network. The second aspect is that feedback about whether or not a pattern of a particular class is present is all that should be required for training; information about the exact location and relationship to other patterns should not be required. The target information, thus, does not include information about where the patterns occur, only about whether a particular pattern occurs. We have incorporated two design principles into our network to deal with these problems. The first is to build translation independence into the network by using linked local receptive fields. The second is to build a fixed "forward model" (cf. Jordan and Rumelhart, 1990) which translates a location-specific recognition process into a location-independent output value. This output gives rise to a nonspecific error signal which is propagated back through this fixed network to train the underlying location-specific network.

2 NETWORK ARCHITECTURE AND ALGORITHM

The basic organization of the network is illustrated in Figure 1. In the case of character recognition, the input consists of a set of pixels over which the stimulus patterns are displayed. We designate the stimulus pattern by the vector X.
In general, we assume that any character can be presented in any position and that characters may overlap. The input image then projects to a set of hidden units which learn to abstract features from the input field. These feature abstraction units are organized into sheets, one for each feature type. Each unit within a sheet is constrained to have the same weights as every other unit in the sheet (to enforce translational invariance). This is the same method used by Rumelhart, Hinton and Williams (1986) in solving the so-called T/C problem, and the one used by LeCun et al. (1990) in their work on ZIP-code recognition. We let the activation value of the hidden unit of type $i$ at location $j$ be a logistic sigmoidal function of its net input and designate it $h_{ij}$. We interpret $h_{ij}$ as the probability that feature $i$ is present in the input at position $j$. The hidden units then project onto a set of sheets of position-specific character recognition units, one sheet for each character type. These units have exponential activation functions, and each unit in the sheet receives inputs from a local receptive field block of feature detection units as shown in Figure 1. As with the hidden units, the weights in each exponential unit sheet are linked, enforcing translational invariance. We designate as $X_{ij}$ the activation of the unit for detecting character $i$ at location $j$, and define $X_{ij} = e^{\eta_{ij}}$, where the net input to the unit is

$\eta_{ij} = \sum_k w_{ik} h_{kj} + \beta_i$  (1)

and $w_{ik}$ is the weight from hidden unit $h_{kj}$ to the detector $X_{ij}$.

¹The algorithm and network design presented here was first proposed by Rumelhart in a presentation entitled "Learning and Generalization in Multilayer Networks" given at the NATO Advanced Research Workshop on Neuro Computing, Algorithms, Architectures and Applications held in Les Arcs, France, in February 1989. The algorithm can be viewed as a generalization and refinement of the TDNN of Lang, Hinton, and Waibel, 1990.

[Figure 1 layer schematic: grey-scale input image; sheets of sigmoidal feature detectors $h^k_{x'y'}$ with linked local receptive fields; sheets of exponential character-detector units with net input $\eta^i_{xy} = \sum_k w^{ik}_{x'y'} h^k_{x'y'}$ and activation $X^i_{xy} = e^{\eta^i_{xy}}$; linear units $S_i = \sum_{x,y} X^i_{xy}$; outputs $p_i = S_i/(1 + S_i)$ compared against the targets.]

Figure 1: The Integrated Segmentation and Recognition (ISR) network. The input image may contain several characters and is presented to the network in a two-dimensional array of grey-scale values. Units in the first block $h^k_{x'y'}$ have linked local receptive fields to the input image and detect features of type $k$. The exponential units in the next block receive inputs from a local receptive field of hidden sigmoidal units. The weights $w^{ik}_{x'y'}$ connect the hidden unit $h^k_{x'y'}$ to the exponential unit $X^i_{xy}$. The architecture enforces translational invariance across the sheets of units by linking the weights and shifting the receptive fields in each dimension. Finally, the activity in each individual sheet of exponential units is summed by the linear units $S_i$ and converted to a probability $p_i$. The two-dimensional input image can be thought of as a one-dimensional vector X as discussed in the text. For notational convenience we use one-dimensional indices ($j$) in the text rather than two-dimensional ($xy$) as shown in the figure. All of the mathematics goes through if one replaces $j \leftrightarrow xy$.
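A schematic forward pass of this architecture is sketched below: convolution implements the linked-receptive-field weight sharing, and all layer sizes, kernel shapes, and the bias term are illustrative stand-ins rather than the authors' configuration.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def isr_forward(image, feature_kernels, detector_kernels, beta):
    """Forward pass: image -> sigmoidal feature sheets h^k ->
    exponential detector sheets X^i -> sheet sums S_i ->
    outputs p_i = S_i / (1 + S_i)."""
    # Feature sheets: one shared-weight (convolutional) map per feature type.
    h = [sigmoid(convolve2d(image, w, mode="valid")) for w in feature_kernels]
    p = np.zeros(len(detector_kernels))
    for i, kernels in enumerate(detector_kernels):   # one sheet per character
        eta = beta[i] + sum(convolve2d(hk, wk, mode="valid")
                            for hk, wk in zip(h, kernels))
        S = np.exp(eta).sum()                        # S_i = sum_j X_ij
        p[i] = S / (1.0 + S)                         # Equation (2)
    return p

rng = np.random.default_rng(0)
image = rng.random((20, 20))
feature_kernels = [rng.normal(scale=0.1, size=(5, 5)) for _ in range(4)]
detector_kernels = [[rng.normal(scale=0.1, size=(3, 3)) for _ in range(4)]
                    for _ in range(10)]              # 10 character classes
print(isr_forward(image, feature_kernels, detector_kernels, np.zeros(10)))
```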
As we argue in Keeler, Rumelhart and Leow (1991), $\eta_{ij}$ can usefully be interpreted as the logarithm of the likelihood ratio favoring the hypothesis that a character of type $i$ is at location $j$ of the input field. Since $X_{ij}$ is the exponential of $\eta_{ij}$, the X units are to be interpreted as representing the likelihood ratios themselves. Thus, we can interpret the output of the X units directly as the evidence favoring the assumption that there is a character of a particular type at a particular location. If we were willing and able to carefully segment the input and tell the network the exact location of each character, we could use a standard training technique to train the network to recognize characters at any location with any degree of overlap. However, we are interested in a training algorithm in which we don't have to provide the network with such specific training information. We are interested in simply telling the network which characters are present in the input, not where each character is. This approach saves tremendous time and effort in data preparation and labeling. To implement this idea, we have built an additional network which takes the output of the X units and computes, through a fixed output network, the probability that a given character is present anywhere in the input. We do this by adding two additional layers of units. The first layer of units, the S units, simply sum the activity of each sheet of the X units. The activity of unit $S_i$ can, under certain assumptions, be interpreted as the likelihood ratio that a character of type $i$ occurred anywhere in the input field. Finally, in the output layer, we convert the likelihood ratio into a probability by the formula

$p_i = \frac{S_i}{1 + S_i}.$  (2)

Thus, $p_i$ is interpreted as representing directly the probability that character $i$ occurred in the input field.

2.1 The Learning Rule

Having set up our network, it is straightforward to compute the derivative of the error function with respect to $\eta_{ij}$. We get a particularly simple learning rule if we let the objective function be the cross-entropy function,

$\ell = \sum_i t_i \ln p_i + (1 - t_i)\ln(1 - p_i),$  (3)

where $t_i$ equals 1 if character $i$ is presented and zero otherwise. In this case, we get the following rule:

$\frac{\partial \ell}{\partial \eta_{ij}} = (t_i - p_i)\,\frac{X_{ij}}{\sum_k X_{ik}}.$  (4)

It should be noted that this is a kind of competitive rule in which the learning is proportional to the relative strength of the activation of the unit at a particular location in the X layer compared to the strength of activation in the entire layer. This is valid if we assume that the character appears either exactly once or not at all. This ratio is the conditional probability that the target was at position $j$ under the assumption that the target was, in fact, presented. It is also possible to derive a learning rule for the case where more than one of the same character is present [3].
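Numerically, the competitive update of Equation (4) looks like this; the shapes are illustrative, with eta collecting the detector net inputs for one training image.

```python
import numpy as np

def isr_gradient(eta, targets):
    """d(objective)/d(eta_ij) = (t_i - p_i) * X_ij / sum_k X_ik, Equation (4).
    eta: (classes, positions) array of detector net inputs;
    targets: 0/1 vector saying which characters are present (anywhere)."""
    X = np.exp(eta)                         # likelihood ratios per location
    S = X.sum(axis=1, keepdims=True)        # S_i: evidence summed over sheet
    p = S / (1.0 + S)                       # Equation (2)
    # Competitive term: each sheet's error is distributed over locations
    # in proportion to their share of the sheet's total activity.
    return (targets[:, None] - p) * X / S

rng = np.random.default_rng(2)
eta = rng.normal(size=(10, 6))              # 10 classes, 6 flattened positions
t = np.zeros(10); t[3] = t[5] = 1.0         # say characters 3 and 5 are present
g = isr_gradient(eta, t)
print(g[3])                                 # positive, peaked where X_3j is largest
```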
3 EXPERIMENTAL RESULTS

To investigate the ability of this network to simultaneously segment and recognize characters in an integrated system, we trained the network outlined in Section 2 on a database of hand-printed numerals taken from financial documents. We used training and test sets of about 9,000 and 1,800 characters, respectively. We placed pairs of these grey-scaled characters on the input plane at positions determined by a distance parameter which tells how far apart to place the centers of the characters. We used a distance parameter of 1.2, which indicates that the centers were about 1.2 characters apart, with an added random displacement in the x and y dimensions of ±0.25 and ±0.15 of the leftmost character size, respectively. With these parameters, the characters touch or overlap about 15% of the time. The network had 10 output units, and the target was to turn on the units of the two characters in the input window, regardless of what order or position they occurred in. Thus the pair (3,5) has the same target as (5,3): target = (0001010000).

Figure 2: The ISR network's performance. This figure shows two touching characters (06, shown at left) in the input image and the corresponding activation of the sheets of exponential units. The network was never trained on these particular characters individually or as a pair, but gets the correct activation of greater than 0.9 on the 6 and 0.8 on the 0, with near 0.0 activation for all other outputs. Note the sharp peaks of activity in the 0 and 6 layers approximately above the centers of the characters, even though they are touching. In this case the maximum activity of the 6-sheet was about 14,000 and had to be scaled by a factor of about 70 to fit in the graph space. The maximum activity in the 0 sheet was approximately 196.

After training on several hundred thousand of the randomly sampled pairs of numbers from the 9,000, the network generalized correctly on about 81% of the pairs. This pair accuracy corresponds to a single-character recognition accuracy of about 90%. The network recognizes isolated single characters at an accuracy of about 95%. Note that this is an artificially generated data set, and by changing the distance parameter we can make the problem as simple or as difficult as we desire, up to the point where the characters overlap so much that a human cannot recognize them. Most conventional segmentation algorithms do not deal with touching characters, and so would presumably miss the vast majority of these characters. To see how overlap affects performance, we tested generalization in the same network on 100 pairs with the distance parameter lowered to 1.0 and 0.95. With a distance parameter of 1.0, the characters touch or overlap about 50% of the time. Of those, the network correctly identified 80%. Of the 20% that were missed, about half were unrecognizable by a human. With a distance parameter of 0.95, causing about 66% of the characters to touch, about 74% are correctly identified. As one expects, performance drops for smaller distance parameters.

The qualitative behavior of this system is quite interesting. As described in Section 2, the learning rule for the exponential units contains a term that is competitive in nature. This term favors "winner-take-all" behavior for the units in that sheet, in the sense that nearly equal activations are unstable under the learning rule: if one presents the same pattern again and again, the learning rule will cause one activation to grow or shrink away from the other at an exponential rate. This causes self-organization to occur in the exponential sheets, and we would expect the exponential units to organize into highly localized activations or "spikes" of activity on the appropriate exponential layers directly above the input characters.
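The claimed instability of nearly equal activations can be seen in a toy iteration that repeatedly applies the Equation (4) update directly to two detector net inputs, abstracting away weights and features; the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

eta = np.array([1.00, 1.01])          # two nearly equal detector net inputs
lr, t_i = 1.0, 1.0                    # the character is present: t_i = 1
for _ in range(100_000):
    X = np.exp(eta)
    S = X.sum()
    p = S / (1.0 + S)
    eta += lr * (t_i - p) * X / S     # Equation (4), applied repeatedly
X = np.exp(eta)
print(X / X.sum())                    # the stronger unit's share approaches 1
```

The gap between the two net inputs grows monotonically on every step, so the initially stronger location eventually dominates its sheet, which is the winner-take-all behavior described above.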
This is exactly the behavior that is observed in the trained network, as exemplified in Figure 2. In this figure we see two overlapping characters in the input image (06). The network generalized properly, with output activity of about 0.8 for the 0, 0.99 for the 6, and about 0.0 for everything else. Note that in the exponential layer there are very sharp spikes of activity directly above the 0 and the 6 in the appropriate layers. Indeed, it has been our experience that even with quite noisy input images, the representation in the exponential layer is very localized, and we could presumably recover the positional information by examining the activity of the exponential units. We can thus think of these spikes in the exponential layer as "smart histograms": the exponential units in each sheet learn to look for specific combinations of features in the input layer and reject other combinations of inputs. This allows them to respond correctly even if there is a significant amount of noise in the input image, or if the characters happen to be touching or broken.

4 DISCUSSION

The system presented here demonstrates that neural networks can, in fact, be used for segmentation as well as recognition. We have by no means demonstrated that this method is better than conventional segmentation/recognition systems in overall performance. However, most conventional systems cannot deal with touching, broken, or noisy characters very well at all, whereas the present system handles all of these cases and recognition in a single, integrated fashion. This approach not only offers an integrated solution to the problems at hand, it also has the properties of being translation invariant, trainable with minimal information, and could be implemented in hardware for extremely fast feed-forward performance. Note that the architecture discussed here is similar in some respects to the neocognitron model of Fukushima (1980). However, the system is different in several important aspects. First of all, the features here are learned through backpropagation rather than hand-coded as in the neocognitron. Second, the neural network self-organizes positional information via localized activation in the exponential layers. Third, the network is all feed-forward in its run-time dynamics. Finally, it is worth pointing out that there are other aspects of the problem that we have not dealt with: our network was trained on approximately the same size characters (to within 40% in height, and with no normalization in the x-dimension). We have not dealt here with the aspects of normalization, attentional focusing, or recovery of positional information, all of which would be needed in a functioning system.

Acknowledgements

We thank Peter Robinson from NCR Waterloo for providing the training data, and Eric Hartman, Carsten Peterson, Richard Durbin, and Charles Rosenberg for useful discussions.

References

[1] K. Fukushima. (1980) Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36:193-202.

[2] M. I. Jordan and D. E. Rumelhart. (1990) Forward models: Supervised learning with a distal teacher. MIT Center for Cognitive Science, Occasional Paper #40.

[3] J. Keeler, D. E. Rumelhart and W. K. Leow. (1991) Integrated segmentation and recognition of hand-printed numerals. MCC Technical Report ACT-NN-10-91.

[4] K. Lang, A. Waibel and G. Hinton.
(1990) A time delay neural network architecture for isolated word recognition. Neural Networks 3:23-44.

[5] Y. Le Cun, B. Boser, J. S. Denker, S. Solla, R. Howard, and L. Jackel. (1990) Back-propagation applied to handwritten zip code recognition. Neural Computation 1(4):541-551.

[6] D. E. Rumelhart, G. E. Hinton and R. J. Williams. (1986) "Learning internal representations by error propagation," in D. E. Rumelhart, J. L. McClelland and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press/Bradford.
Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

Renaud Jolivet*
Institute of Pharmacology and Toxicology, University of Zürich, Zürich, Switzerland
[email protected]

Anand Singh
Institute of Pharmacology and Toxicology, University of Zürich, Zürich, Switzerland
[email protected]

Pierre J. Magistretti†
Brain Mind Institute, EPFL, Lausanne, Switzerland
[email protected]

Bruno Weber
Institute of Pharmacology and Toxicology, University of Zürich, Zürich, Switzerland
[email protected]

* Contact author. † Second affiliation: Center for Psychiatric Neuroscience, University of Lausanne, Lausanne, Switzerland.

Abstract

Sodium entry during an action potential determines the energy efficiency of a neuron. The classic Hodgkin-Huxley model of action potential generation is notoriously inefficient in that regard, with about 4 times more charge flowing through the membrane than the theoretical minimum required to achieve the observed depolarization. Yet recent experimental results show that mammalian neurons are close to optimal metabolic efficiency, and that the dynamics of their voltage-gated channels during the action potential differs significantly from that exhibited by the classic Hodgkin-Huxley model. Nevertheless, the original Hodgkin-Huxley model is still widely used, and rarely to model the squid giant axon from which it was derived. Here, we introduce a novel family of Hodgkin-Huxley models that correctly account for sodium entry and action potential width, and whose voltage-gated channels display dynamics very similar to the most recent experimental observations in mammalian neurons. We speak here of a family of models because the model is parameterized by a unique parameter, the variations of which allow us to reproduce the entire range of experimental observations, from cortical pyramidal neurons to Purkinje cells, yielding a very economical framework for modeling a wide range of different central neurons. The present paper demonstrates the performance of this new family of models and discusses its properties.

1 Introduction

Action potentials play the central role in neuron-to-neuron communication. At the onset of an action potential, the change in the membrane potential leads to the opening of voltage-gated sodium channels and an influx of sodium ions. Once the membrane is sufficiently depolarized, the opening of voltage-gated potassium channels leads to an efflux of potassium ions and brings the membrane back to the resting potential. During and after this process, the ionic gradients are restored by the electrogenic Na,K-ATPase pump, which extrudes 3 sodium ions in exchange for 2 potassium ions and requires 1 ATP molecule per cycle. There is thus a metabolic cost, in terms of ATP molecules to be spent, associated with every action potential. This metabolic cost can be roughly estimated as 1/3 of the sodium entry into the neuron. A metabolically efficient action potential would have sodium entry restricted to the rising phase of the action potential, so that a minimal number of charges is transported to produce the observed voltage change. This can be encapsulated in a measure called the Sodium Entry Ratio (SER), defined as the integral of the sodium current during the action potential divided by the product of the membrane capacitance and the observed change in membrane voltage. A metabolically optimally efficient neuron would have a SER of 1 or close to 1.
The metabolic efficiency critically depends on the gating kinetics of the voltage-dependent channels and on their interaction during the action potential. All biophysical models of action potential generation rely on the framework originally established by Hodgkin and Huxley [1], and certain models in use today still rely on their parameters for the voltage-gated sodium and potassium channels responsible for action potential generation, even though parameterizations of the Hodgkin-Huxley model optimized for certain mammalian neurons have been available and used for years [2,3]. Analyzing the squid giant axon action potential, Hodgkin and Huxley established that the SER is approximately 4, owing to the fact that the sodium channels remain open during the falling phase of the action potential [1]. This has led to the idea that action potentials are metabolically inefficient, and these numbers were used as key input in a number of studies aiming at establishing an energy budget for brain tissue (see e.g. [4]). However, two recent studies have demonstrated that mammalian neurons, having fundamentally similar action potentials to the squid giant axon, are significantly more efficient, owing to lesser sodium entry during the falling phase of the action potential [5,6]. In the first study, Alle and colleagues observed that action potentials in mossy fiber boutons of hippocampal granule neurons have only about 30% more sodium entry than the theoretical minimum [5] (SER ≈ 1.3). In the second study, Carter and Bean expanded on this finding, showing that different central neurons have different SERs [6]. More specifically, they measured that cortical pyramidal neurons are the most efficient, with a SER ≈ 1.2, while pyramidal neurons from the CA1 hippocampus region have a SER ≈ 1.6. On the other hand, inhibitory neurons were found to have less efficient action potentials, with both cerebellar Purkinje neurons and cortical basket cell interneurons having a SER ≈ 2. Interestingly, this is postulated to originate in the type or distribution of voltage-gated potassium channels present in each of these cell types. Even the less efficient neurons are twice as metabolically efficient as the original Hodgkin-Huxley neuron.

These recent findings call for a revision of the original Hodgkin-Huxley model, which fails on several counts to accurately describe central mammalian neurons. The aim of the present work is to formulate an in silico model for an accurate description of the sodium and potassium currents underlying the generation of action potentials in central mammalian neurons. To this end, we introduce a novel family of Hodgkin-Huxley models HH_ξ parameterized by a single parameter ξ. Varying ξ in a meaningful range allows reproducing the whole range of observations of Carter and Bean [6], providing a very economical modeling strategy that can be used to model a wide range of central neurons, from cortical pyramidal neurons to Purkinje cells.

The next section provides a brief description of the model and of the strategy used to design it, as well as a formal definition of the key quantities, like the Sodium Entry Ratio, against which the predictions of our family of models are compared. The third section demonstrates the performance of the novel family of models and characterizes its properties. Finally, the last section discusses the implications of our results.
2 Model and methods

2.1 Hodgkin-Huxley model family

In order to develop a novel family of Hodgkin-Huxley models, we started from the original Hodgkin-Huxley formalism [1]. In this formalism, the evolution of the membrane voltage V is governed by

C dV/dt = − Σ_k I_k + I_ext    (1)

with C the membrane capacitance and I_ext an externally applied current. The currents I_k are transmembrane ionic currents. Following the credo, they are described by

Σ_k I_k = g_Na m^3 h (V − E_Na) + g_K n^4 (V − E_K) + g_L (V − E_L)    (2)

with g_Na, g_K and g_L the ionic conductances and E_Na, E_K and E_L the reversal potentials associated with the sodium current i_Na = g_Na m^3 h (V − E_Na), the potassium current i_K = g_K n^4 (V − E_K) and the uncharacterized leak current i_L = g_L (V − E_L). All three gating variables m, n and h follow the generic equation

dx/dt = α_x(V) (1 − x) − β_x(V) x    (3)

with x standing alternately for m, n or h. The terms α_x and β_x are non-trivial functions of the voltage V. It is sometimes useful to reformulate Eq. 3 as

dx/dt = − (x − x_∞(V)) / τ_x(V)    (4)

in which the equilibrium value x_∞ = α_x/(α_x + β_x) is reached with the time constant τ_x = 1/(α_x + β_x), which has units of ms. Specific values for the constants (C, g_x and E_x) and for the functions α_x and β_x were originally chosen to match those introduced in [7], with the exception that the model introduced in [7] includes a secondary potassium channel that was abandoned here, thus retaining only the channels originally described by Hodgkin and Huxley. The reversal potentials E_x were then adjusted to match known concentrations of the respective ions in and around mammalian cells. We then proceeded to explore the behavior of the model and observed that the specific dynamics of i_Na and i_K during an action potential is critically dependent on the exact definition of α_n. In our case, α_n is defined by

α_n(V) = (p_1 V − p_2) / (1 − e^{−(p_3 V − p_4)/p_5})    (5)

with p_1, ..., p_5 some parameters. More specifically, we observed that by varying p_5 in a meaningful range, we could reproduce qualitatively the observations of Carter and Bean [6] regarding the dynamics of the sodium current i_Na during individual action potentials. Building on these premises, we set p_5 = ξ with ξ varying in the range 10.5 ≤ ξ ≤ 16. These boundary values were chosen relatively arbitrarily, by exploring the range in which the models stay close to experimental observations. All the other parameters appearing in the α_x and β_x functions were then optimized using a standard optimization algorithm so that the model reproduces as closely as possible the values characterizing action potential dynamics as reported in [6]. The final values of the parameters of the novel family of Hodgkin-Huxley models are reported in Table 1. The values of the other parameters used in the model are: C = 1.0 μF/cm^2, g_L = 0.25 mS/cm^2, E_L = −70 mV.

Table 1: The novel family of Hodgkin-Huxley models HH_ξ

channel  variable  α_x                                                    β_x                                              g_x (mS/cm^2)  E_x (mV)
Na       m         (41.3 V − 3051) / (1 − exp(−(V − 77.46)/13.27))        1.2499 exp(−V/42.129)                            112.7          50
         h         0.0036 exp(−V/24.965)                                  10.405 / (exp(−(1.024 V − 26.181)/15.488) + 1)
K        n         (0.992 V − 96.73) / (1 − exp(−(1.042 V − 97.517)/ξ))   0.0159 exp(−V/21.964)                            224.6          −85.0

The voltage V is expressed in mV.
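The gating kinetics of Table 1 and the integration of Eqs. (1)-(3) are easy to sketch in code. The listing below is an illustrative reimplementation, not the authors' code: the rate constants follow our reading of Table 1, and the integration scheme (an exponential-Euler step for the gating variables, forward Euler for V) and the step size are our choices, differing from the stiff solver described in Sec. 2.2 below.

import numpy as np

def rates(V, xi):
    # alpha/beta rate functions of Table 1 (V in mV); xi is the free parameter
    a_m = (41.3 * V - 3051.0) / (1.0 - np.exp(-(V - 77.46) / 13.27))
    b_m = 1.2499 * np.exp(-V / 42.129)
    a_h = 0.0036 * np.exp(-V / 24.965)
    b_h = 10.405 / (np.exp(-(1.024 * V - 26.181) / 15.488) + 1.0)
    a_n = (0.992 * V - 96.73) / (1.0 - np.exp(-(1.042 * V - 97.517) / xi))
    b_n = 0.0159 * np.exp(-V / 21.964)
    return (a_m, b_m), (a_h, b_h), (a_n, b_n)

def gate(x, a, b, dt):
    # exponential-Euler step of Eq. (4): relax x towards x_inf with time constant tau
    x_inf, tau = a / (a + b), 1.0 / (a + b)
    return x_inf + (x - x_inf) * np.exp(-dt / tau)

def simulate(xi, I_ext, dt=0.01, T=40.0, C=1.0):
    # Integrate Eq. (1); I_ext is a function of time returning uA/cm^2.
    gNa, ENa, gK, EK, gL, EL = 112.7, 50.0, 224.6, -85.0, 0.25, -70.0
    V = -70.0
    (am, bm), (ah, bh), (an, bn) = rates(V, xi)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)   # start at rest
    trace = np.empty((int(T / dt), 3))                         # columns: t, V, i_Na
    for k in range(trace.shape[0]):
        (am, bm), (ah, bh), (an, bn) = rates(V, xi)
        m, h, n = gate(m, am, bm, dt), gate(h, ah, bh, dt), gate(n, an, bn, dt)
        iNa = gNa * m**3 * h * (V - ENa)
        iK = gK * n**4 * (V - EK)
        iL = gL * (V - EL)
        V += dt * (-(iNa + iK + iL) + I_ext(k * dt)) / C       # Eq. (1)
        trace[k] = (k * dt, V, iNa)
    return trace

# 1 ms superthreshold pulse, as in Figure 1 below
trace = simulate(xi=13.5, I_ext=lambda t: 25.5 if 5.0 <= t < 6.0 else 0.0)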
[Figure 1: three panels, each showing the membrane voltage (top) and the ionic currents (bottom) during a stimulated action potential; across the three panels the measured action potential widths are 1.30, 0.60 and 0.40 ms, and the SERs 2.67, 1.55 and 1.91.]

Figure 1: Dynamics of the membrane voltage V (top; black line), of the sodium current i_Na (bottom; green line), of the potassium current i_K (bottom; blue line) and of the total current C dV/dt (bottom; red line; see Eqs. 1-2) upon stimulation by a superthreshold pulse of current (cyan area; I_ext = 25.5 μA/cm^2 for 1 ms). In each panel, SER stands for Sodium Entry Ratio (see Eq. 6) and "width" indicates the width of the action potential, measured at the position indicated by the cyan arrow (see the "Sodium entry ratio and numerics" subsection). (a) ξ = 10.5. (b) ξ = 13.5. (c) ξ = 16.0.

2.2 Sodium entry ratio and numerics

The relevant quantities for comparing the novel family of Hodgkin-Huxley models HH_ξ to the experimental dataset under consideration are: (i) the action potential peak, (ii) the action potential width, and (iii) the Sodium Entry Ratio (SER). The action potential peak is simply defined as the maximal depolarization reached during the action potential. Following [6], the action potential width is measured at half the action potential height, the height being measured as the difference in membrane potential from the peak to the resting potential. Finally, still following [6], the SER is defined for an isolated action potential by

SER = ∫ i_Na dt / (C ΔV)    (6)

with ΔV the change in voltage during the action potential, measured from the action potential threshold θ to its peak. The action potential threshold θ was defined as 1% of the maximal dV/dt. All simulations were implemented in MATLAB (The MathWorks, Natick, MA). The system of equations was integrated using a solver for stiff problems and a time step of 0.05 ms.
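Given a simulated trace, the quantities entering Eq. (6) are straightforward to extract. The fragment below reuses the simulate function from the previous listing; the backward search for the threshold crossing and the rectangle-rule integration over an isolated spike are our own choices, not the authors' code.

import numpy as np

def ser_and_width(trace, C=1.0):
    t, V, iNa = trace[:, 0], trace[:, 1], trace[:, 2]
    dVdt = np.gradient(V, t)
    peak = np.argmax(V)
    # threshold theta: last point before the peak where dV/dt < 1% of its maximum
    onset = peak - np.argmax(dVdt[peak::-1] < 0.01 * dVdt.max())
    dV = V[peak] - V[onset]
    # Eq. (6): sodium charge moved during the spike (i_Na is inward, hence negative),
    # relative to the minimal charge C * dV; rectangle rule on the uniform time grid
    ser = abs(np.sum(iNa[onset:])) * (t[1] - t[0]) / (C * dV)
    # width at half height above the resting potential, assuming a single spike
    half = V[0] + 0.5 * (V[peak] - V[0])
    above = np.where(V >= half)[0]
    return ser, t[above[-1]] - t[above[0]]

ser, width = ser_and_width(simulate(xi=13.5, I_ext=lambda t: 25.5 if 5.0 <= t < 6.0 else 0.0))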
3 Results

Recent experimental results suggest that the dynamics of the action-potential-generating voltage-gated channels in the classical Hodgkin-Huxley model does not correctly reproduce what is observed in mammalian neurons [5,6]. More specifically, the Hodgkin-Huxley equations generate a sodium current with a characteristic secondary peak during the decaying phase of the action potential, leading to a very important influx of sodium ions that counters the effect of potassium ions, making the model metabolically inefficient [1]. Mammalian neurons display a sodium current with a unique sharp peak, or at most a low-amplitude secondary peak [5,6].

[Figure 2: scatter plot of SER against action potential width (about 0.5 to 1.5 ms), with experimental points, a linear regression line, and the model predictions.]

Figure 2: Predictions of our model family are compared to the experimentally observed correlation between the action potential width and the SER. Experimental observations (red squares) are adapted from [6]. Data were collected for (from left to right): Purkinje cells, cortical interneurons, CA1 pyramidal neurons and cortical pyramidal neurons. Error bars stand for the standard deviation. The red line is a simple linear regression through the experimental data (R^2 = 0.99). The predictions of our model (black squares) are indicated for decreasing values of ξ from left (ξ = 16) to right (ξ = 10.5).

In the preceding section, we introduced a novel family of models HH_ξ parameterized by the unique parameter ξ (see Table 1). We will now show how varying ξ allows reproducing the wide range of dynamics observed experimentally.

Figure 1 shows the behavior of HH_ξ during an isolated action potential for three different values of ξ. In all three cases, the action potential is triggered by the same unique square pulse of current, generating an isolated action potential with roughly the same latency, about 4 ms after the end of the stimulating pulse. Yet the behavior of the model is very different in each case. For low values of ξ, the sodium current i_Na exhibits a single very sharp peak, being almost null after the action potential has peaked. At high values of ξ, i_Na exhibits a distinctive secondary peak after the action potential has peaked. The potassium current i_K is also much bigger in the latter case. As a consequence, the model has a low Sodium Entry Ratio (SER) at low values of ξ and a high SER at high values of ξ (see Eq. 6). We also observe a negative correlation between ξ and the width of the action potential: the width of action potentials decreases when ξ increases. Finally, action potentials generated at low ξ values return to the resting potential from above, while action potentials generated at high ξ values exhibit an after-hyperpolarization. These different instances of our family of models HH_ξ cover all the experimentally observed behaviors as reported in [6] (compare with Figures 1-3 therein). Indeed, Carter and Bean observed neurons with low SER, broad action potentials and a single sharp peak in the sodium current dynamics (cortical and CA1 pyramidal neurons). They also observed neurons with high SER, narrow action potentials and a distinctive secondary peak in the sodium current dynamics during the decaying phase of the action potential (cortical interneurons and cerebellar Purkinje cells). Figure 2 compares the predictions of our model family with the observations reported in [6]. It clearly demonstrates that, by varying ξ, our model family is able to capture the whole range of observed behaviors and quantitatively fits the measured SERs and action potential widths. We also observe a faint positive correlation between the action potential width and its peak, as in [6] (not shown).

While the dynamics of the gating variables is traditionally formulated in terms of the α_x and β_x functions (see Eq. 3), it is convenient to reformulate the governing equation in the form of Eq. 4, yielding for each gating variable an equilibrium value x_∞(V) and a time constant τ_x(V).

[Figure 3: curves of x_∞ (top) and τ_x (bottom, up to about 50 ms) against the membrane potential (−100 to +100 mV), for ξ = 10, 12, 14 and 16.]

Figure 3: Equilibrium function x_∞ (top) and time constant τ_x (bottom) as functions of the membrane voltage, for different values of ξ, for the gating variables m (red line), h (green line) and n (dotted blue lines).
This supports remarkably well the arguments of Carter and Bean to explain the relative metabolic inefficiency of GABAergic neurons. Indeed, fast-spiking neurons with narrow action potentials use fast-activating Kv3 channels to repolarize the membrane. It is postulated that, in these cells, recovery begins sooner and from more hyperpolarized voltages in remarkable agreement with the evolution of n? and ?n in our modeling framework. It is also interesting to note that Kv3 channels enable fast spiking [8]. This is supposedly due to incomplete sodium channel inactivation and to earlier recovery, in effect speeding recovery and reducing the refractory period. Finally, Figure 4 shows the membrane voltage V when the model is subjected to a constant input as well as the corresponding gain functions or frequency versus current curves. The f ? I curve has the typical saturating profile observed for many neurons [9] and all the models start spiking at a non-zero frequency. In line with the idea that neurons with a sharp action potential and incomplete inactivation of sodium channels can spike faster, the discharge frequency increases with the value of ? for a given input current. 4 Discussion Recent experimental results have highlighted that the original Hodgkin-Huxley model [1] is not particularly well suited to describe the dynamics of sodium and potassium voltage-gated channels during the course of an action potential in mammalian neurons. The Hodgkin-Huxley model is also a poor foundation for studies dedicated to computing an energy budget for the mammalian brain since it severely overestimates the metabolic cost associated with action potentials by at least a factor of 2. Despite that, the Hodgkin-Huxley model is still widely used and often for modeling projects specifically targeting the mammalian brain. 6 (a) (b) 0.2 xsi = 12 xsi = 13 xsi = 14 xsi = 15 xsi = 16 0.18 0.16 Iapp = 4 0.14 f [kHz] 10 20 30 40 50 60 70 80 0 10 20 30 40 50 60 70 80 0 10 20 30 40 50 60 70 80 0 10 20 30 40 50 60 70 80 Iapp = 12 0.12 0.1 Iapp = 20 0.08 0.06 Iapp = 28 0.04 0.02 0 0 0 5 10 15 20 25 30 time [ms] 35 Iapp [?A/cm2] Figure 4: Gain functions and spike trains elicited by constant input. (a) The gain function (f ? I curve) is plotted for different values of the parameter ?. The models were stimulated with a constant current input of 5 sec after an initial 30 ms pulse. (b) Sample spike trains for ? = 14 for different values of the externally applied current Iext . Here we have introduced a novel instance of the Hodgkin-Huxley model aimed at correcting these issues. The proposed family of models uses the original equations of Hodgkin and Huxley as they were formulated originally but introduces new expressions for the functions ?x and ?x that characterize the dynamics of the gating variables m, n and h. Moreover, the specific expression for ?n depends on an extra parameter ?. By varying ? in a specific range, our family of models is able to quantitatively reproduce a wide range of dynamics for the voltage-gated sodium and potassium channels during individual action potentials. Our family of models is able to generate broad, metabolically efficient action potentials with a sharp single peak dynamics of the sodium current as well as narrow, metabolically inefficient action potentials with incomplete inactivation of the sodium channels during the decaying phase of the action potential. These different behaviors cover neuron types as different as cortical pyramidal neurons, cortical interneurons or Purkinje cells. 
For this study we chose a single-compartment Hodgkin-Huxley-type model because it is well suited to comparison with the experimental conditions of Carter and Bean [6]. However, comparing the particular parameterization of the model achieved here with experimental data (see Figure 2) suggests that other changes, e.g. in sodium channel inactivation, may help to explain the differences between different cell types. It should also be noted that action potentials as narrow as 250 μs can be as energy-efficient (SER = 1.3) [10] as the widest action potentials measured by Carter and Bean [6], suggesting that sodium channel kinetics, in addition to potassium channel kinetics, is also different for different cell types and subcellular compartments. Numerous studies have been dedicated to studying the energy constraints of the brain from the coding and network design perspective [4,11] or from the channel kinetics perspective [3,5,6,12]. Recently it has been argued that energy minimization under functional constraints could be the unifying principle governing the specific combination of ion channels that each individual neuron expresses [12]. In support of this hypothesis, it was demonstrated that some mammalian neurons generate their action potentials with currents that almost reach optimal metabolic efficiency [5]. So far, these studies have mostly addressed the question of metabolic efficiency considering isolated action potentials. Moreover, it can be difficult to compare neurons with very different properties. Here, we have introduced a new family of biophysical models able to reproduce different action potentials relevant to this debate, and their underlying currents [6]. We believe that our approach is very valuable in providing mechanistic insights into the specific properties of different types of neurons. It also suggests that it could be possible to design a generic Hodgkin-Huxley-type model family encompassing a very broad range of observed behaviors, in a similar way as the Izhikevich model does for integrate-and-fire-type model neurons [13]. Finally, we believe that our model family will prove invaluable in studying metabolic questions, and in particular in addressing the specific question: why are inhibitory neurons less metabolically efficient than excitatory neurons?

Acknowledgements

RJ is supported by grants from the Olga Mayenfisch Foundation and from the Hartmann Müller Foundation. The authors would like to thank Dr Arnd Roth for helpful discussions.

References

[1] Hodgkin AL, Huxley AF. J Physiol 1952; 116: 449-472.
[2] Destexhe A, Paré D. J Neurophysiol 1999; 81: 1531-1547.
[3] Sengupta B, Stemmler M, Laughlin SB, Niven JE. PLoS Comp Biol 2010; 6: e1000840.
[4] Attwell D, Laughlin SB. J Cereb Blood Flow Metab 2001; 21: 1133-1145.
[5] Alle H, Roth A, Geiger J. Science 2009; 325: 1405-1408.
[6] Carter BC, Bean BP. Neuron 2009; 64: 898-909.
[7] Jolivet R, Lewis TJ, Gerstner W. J Neurophysiol 2004; 92: 959-976.
[8] Lien CC, Jonas P. J Neurosci 2003; 23: 2058-2068.
[9] Rauch A, La Camera G, Lüscher HR, Senn W, Fusi S. J Neurophysiol 2003; 90: 1598-1612.
[10] Alle H, Geiger J. Science 2006; 311: 1290-1293.
[11] Laughlin SB, Sejnowski T. Science 2003; 301: 1870-1874.
[12] Hasenstaub A, Otte S, Callaway E, Sejnowski TJ. PNAS 2010; 107: 12329-12334.
[13] Izhikevich E. IEEE Trans Neural Net 2003; 14: 1569-1572.
A POMDP Extension with Belief-dependent Rewards

Mauricio Araya-López, Olivier Buffet, Vincent Thomas, François Charpillet
Nancy Université / INRIA
LORIA, Campus Scientifique, BP 239
54506 Vandœuvre-lès-Nancy Cedex, France
[email protected]

Abstract

Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly implies reducing the uncertainty on the state. To that end, we introduce ρPOMDPs, an extension of POMDPs where the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, which makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes.

1 Introduction

Sequential decision-making problems under uncertainty and partial observability are typically modeled using Partially Observable Markov Decision Processes (POMDPs) [1], where the objective is to decide how to act so that the sequence of visited states optimizes some performance criterion. However, this formalism is not expressive enough to model problems with arbitrary objective functions. Let us consider active sensing problems, where the objective is to act so as to acquire knowledge about certain state variables. Medical diagnosis, for example, is about asking the right questions and performing the appropriate exams so as to diagnose a patient at a low cost and with high certainty. This can be formalized as a POMDP by rewarding, if successful, a final action consisting in expressing the diagnoser's "best guess". Indeed, a large body of work formalizes active sensing with POMDPs [2, 3, 4]. An issue is that, in some problems, the objective needs to be directly expressed in terms of the uncertainty or information on the state, e.g., to minimize the entropy over a given state variable. In such cases, POMDPs are not appropriate, because the reward function depends on the state and the action, not on the knowledge of the agent. Instead, we need a model where the instant reward depends on the current belief state. The belief MDP formalism provides the needed expressiveness for these problems. Yet, there is not much research on specific algorithms to solve them, so they are usually forced to fit in the POMDP framework, which means changing the original problem definition. One can argue that acquiring information is always a means, not an end, and thus that a "well-defined" sequential decision-making problem with partial observability must always be modeled as a normal POMDP. However, in a number of cases the problem designer has decided to separate the task of looking for information from that of exploiting information. Let us mention two examples: (i) the surveillance [5] and (ii) the exploration [2] of a given area, in both cases when one does not know what to expect from these tasks, and thus how to react to the discoveries.

After reviewing some background knowledge on POMDPs in Section 2, Section 3 introduces ρPOMDPs, an extension of POMDPs where the reward is a (typically convex) function of the belief state, and proves that the convexity of the value function is preserved. Then we show how classical solving algorithms can be adapted, depending on whether the reward function is piecewise linear (Sec. 3.3) or not (Sec. 4).
2 Partially Observable MDPs

The general problem that POMDPs address is for the agent to find a decision policy π choosing, at each time step, the best action based on its past observations and actions, in order to maximize its future gain (which can be measured, for example, through the total accumulated reward or the average reward per time step). Compared to classical deterministic planning, the agent has to face the difficulty of accounting for a system not only with uncertain dynamics but also whose current state is imperfectly known.

2.1 POMDP Description

Formally, POMDPs are defined by a tuple ⟨S, A, Ω, T, O, r, b_0⟩ where, at any time step, the system being in some state s ∈ S (the state space), the agent performs an action a ∈ A (the action space) that results in (1) a transition to a state s' according to the transition function T(s, a, s') = Pr(s'|s, a), (2) an observation o ∈ Ω (the observation space) according to the observation function O(s', a, o) = Pr(o|s', a), and (3) a scalar reward r(s, a). b_0 is the initial probability distribution over states. Unless stated otherwise, the state, action and observation sets are finite [6].

The agent can typically reason about the state of the system by computing a belief state b ∈ Δ = Δ(S), the set of probability distributions over S (Δ(S) forms a simplex, since ‖b‖_1 = 1, which is why we write Δ for the set of all possible b), using the following update formula (based on the Bayes rule) when performing action a and observing o:

b^{a,o}(s') = [ O(s', a, o) / Pr(o|a, b) ] Σ_{s∈S} T(s, a, s') b(s),

where Pr(o|a, b) = Σ_{s,s''∈S} O(s'', a, o) T(s, a, s'') b(s).

Using belief states, a POMDP can be rewritten as an MDP over the belief space, or belief MDP, ⟨Δ, A, τ, ρ⟩, where the new transition function τ and reward function ρ are defined respectively over Δ × A × Δ and Δ × A. With this reformulation, a number of theoretical results about MDPs can be extended, such as the existence of a deterministic policy that is optimal. An issue is that, even if a POMDP has a finite number of states, the corresponding belief MDP is defined over a continuous, and thus infinite, belief space. In this continuous MDP, the objective is to maximize the cumulative reward by looking for a policy taking the current belief state as input. More formally, we are searching for a policy verifying

π* = argmax_{π∈A^Δ} J^π(b_0), where J^π(b_0) = E[ Σ_{t=0}^∞ γ^t ρ_t | b_0, π ],

ρ_t being the expected immediate reward obtained at time step t, and γ a discount factor. Bellman's principle of optimality [7] lets us compute the function J^{π*} recursively through the value function

V_n(b) = max_{a∈A} [ ρ(b, a) + γ ∫_{b'∈Δ} τ(b, a, b') V_{n−1}(b') db' ]
       = max_{a∈A} [ ρ(b, a) + γ Σ_o Pr(o|a, b) V_{n−1}(b^{a,o}) ],    (1)

where, for all b ∈ Δ, V_0(b) = 0, and J^{π*}(b) = V_{n=H}(b) (H being the, possibly infinite, horizon of the problem).

The POMDP framework presents a reward function r(s, a) based on the state and action. On the other hand, the belief MDP presents a reward function ρ(b, a) based on beliefs.
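For concreteness, the Bayes update above can be written directly in code. This is a generic sketch with assumed array conventions (T[s, a, s'], O[s', a, o], beliefs as vectors over S), not code from the paper.

import numpy as np

def belief_update(b, a, o, T, O):
    # b^{a,o}: Bayes update for action a and observation o
    pred = T[:, a, :].T @ b            # sum_s T(s, a, s') b(s)
    unnorm = O[:, a, o] * pred         # multiply by the observation likelihood
    pr_o = unnorm.sum()                # Pr(o | a, b), the normalizing constant
    return unnorm / pr_o, pr_o

# toy usage: 3 states, 2 actions, 2 observations
rng = np.random.default_rng(1)
T = rng.dirichlet(np.ones(3), size=(3, 2))   # T[s, a, :] sums to 1
O = rng.dirichlet(np.ones(2), size=(3, 2))   # O[s', a, :] sums to 1
b, pr_o = belief_update(np.array([1/3, 1/3, 1/3]), a=0, o=1, T=T, O=O)

The returned Pr(o|a, b) is also the weight of each successor belief in Eq. (1).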
For example, P if ?n is the set of vectors representing the value function for horizon n, then Vn (b) = max???n s b(s)?(s). 2.2 Solving POMDPs with Exact Updates Using the PWLC property, one can perform the Bellman update using the following factorization of Eq. 1: " # XX r(s, a) X Vn (b) = max b(s) T (s, a, s0 )O(s0 , a, o)?n?1 (ba,o , s0 ) , (3) + a?A |?| 0 o s s 2 with ?n (b) = argmax b ? ?. If we consider the term in brackets in Eq. 3, this generates |?| ? |A| ???n ?-sets, each one of size |?n?1 |. These sets are defined as  a  r a,o a,o ?n = + P ? ?n?1 ?n?1 ? ?n?1 , |?| (4) where P a,o (s, s0 ) = T (s, a, s0 )O(s0 , a, o) and ra (s) =Lr(s, a). Therefore, for obtaining an exact representation of the value function, one can compute ( being the cross-sum between two sets): [ M a,o ?n = ?n . a o a,o Yet, these ?n sets?and also the final ?n ?are non-parsimonious: some ?-vectors may be useless because the corresponding hyperplanes are below the value function. Pruning phases are then required to remove dominated vectors. There are several algorithms based on pruning techniques like Batch Enumeration [8] or more efficient algorithms such as Witness or Incremental Pruning [6]. 2.3 Solving POMDPs with Approximate Updates The value function updating processes presented above are exact and provide value functions that can be used whatever the initial belief state b0 . A number of approximate POMDP solutions have been proposed to reduce the complexity of these computations, using for example heuristic estimates of the value function, or applying the value update only on selected belief points [9]. We focus here on the latter point-based (PB) approximations, which have largely contributed to the recent progress in solving POMDPs, and whose relevant literature goes from Lovejoy?s early work [10] via Pineau et al.?s PBVI [11], Spaan and Vlassis? Perseus [12], Smith and Simmons? HSVI2 [13], through to Kurniawati et al.?s SARSOP [14]. At each iteration n until convergence, a typical PB algorithm: 1. selects a new set of belief points Bn based on Bn?1 and the current approximation Vn?1 ; 2. performs a Bellman backup at each belief point b ? Bn , resulting in one ?-vector per point; 3. prunes points whose associated hyperplanes are dominated or considered negligible. The various PB algorithms differ mainly in how belief points are selected, and in how the update is performed. Existing belief point selection methods have exploited ideas like using a regular discretization or a random sampling of the belief simplex, picking reachable points (by simulating action sequences starting from b0 ), adding points that reduce the approximation error, or looking in particular at regions relevant to the optimal policy [15]. 2 The ? function returns a vector, so ?n (b, s) = (?n (b))(s). 3 3 3.1 POMDP extension for Active Sensing Introducing ?POMDPs All problems with partial observability confront the issue of getting more information to achieve some goal. This problem is usually implicitly addressed in the resolution process, where acquiring information is only a means for optimizing an expected reward based on the system state. Some active sensing problems can be modeled this way (e.g. active classification), but not all of them. A special kind of problem is when the performance criterion incorporates an explicit measure of the agent?s knowledge about the system, which is based on the beliefs rather than states. 
Surveillance for example is a never-ending task that does not seem to allow for a modeling with state-dependent rewards. Indeed, if we consider the simple problem of knowing the position of a hidden object, it is possible to solve this without even having seen the object (for instance if all the locations but one have been visited). However, the reward of a POMDP cannot model this since it is only based on the current state and action. One solution would be to include the whole history in the state, leading to a combinatorial explosion. We prefer to consider a new way of defining rewards based on the acquired knowledge represented by belief states. The rest of the paper explores the fact that belief MDPs can be used outside the specific definition of ?(b, a) in Eq. 2, and therefore discusses how to solve this special type of active sensing problems. As Eq. 2 is no longer valid, the direct link with POMDPs is broken. We can however still use all the other components of POMDPs such as states, observations, etc. A way of fixing this is to generalize the POMDP framework to a ?-based POMDP (?POMDP), where the reward is not defined as a function r(s, a), but directly as a function ?(b, a). The nature of the ?(b, a) function depends on the problem, but is usually related to some uncertainty or error measure [3, 2, 4]. Most common methods are those based on Shannon?s information theory, in particular Shannon?s entropy or the Kullback-Leibler distance [16]. In order to present these functions as rewards, they have to measure information rather than uncertainty, so the negative entropy function ?ent (b) = log2 (|S|) + P b(s) log2 (b(s))?which is maximal in the corners of the simplex and minimal in the center? s?S is used rather than Shannon?s original entropy. Also, other simpler functions based on the same idea can be used, such as the distance from the simplex center (DSC), ?dsc (b) = kb ? ckm , where c is the center of the simplex and m a positive integer that denotes the order of the metric space. Please note that ?(b, a) is not restricted to be only an uncertainty measurement, but can be a combination of the expected state-action rewards?as in Eq. 2?and an uncertainty or error measurement. For example, Mihaylova et al.?s work [3] defines the active sensing problem as optimizing a weighted sum of uncertainty measurements and costs, where the former depends on the belief and the latter on the system state. In the remainder of this paper, we show how to apply classical POMDP algorithms to ?POMDPs. To that end, we discuss the convexity of the value function, which permits extending these algorithms using PWLC approximations. 3.2 Convexity Property An important property used to solve normal POMDPs is the result that a belief-based value function is convex, because r(s, a) is linear with respect to the belief, and the expectation, sum and max operators preserve this property [1]. For ?POMDPs, this property also holds if the reward function ?(b, a) is convex, as shown in Theorem 3.1. Theorem 3.1. If ? and V0 are convex functions over ?, then the value function Vn of the belief MDP is convex over ? at any time step n. 
[Proof in [17, Appendix]]

This last theorem is based on ρ(b, a) being a convex function over b, which is a natural property for uncertainty (or information) measures, because the objective is to avoid belief distributions that do not give much information about which state the system is in, and to assign higher rewards to those beliefs that give higher probabilities of being in a specific state. Thus, a reward function meant to reduce the uncertainty must provide high payloads near the corners of the simplex, and low payloads near its center. For that reason, we focus only on reward functions that comply with convexity in the rest of the paper. The initial value function V_0 might be any convex function for infinite-horizon problems, but by definition V_0 = 0 for finite-horizon problems. We will use the latter case for the rest of the paper, to provide fairly general results for both kinds of problems. Plus, starting with V_0 = 0, it is also easy to prove by induction that, if ρ is continuous (respectively differentiable), then V_n is continuous (respectively piecewise differentiable).

3.3 Piecewise Linear Reward Functions

This section focuses on the case where ρ is a PWLC function, and shows that only a small adaptation of the exact and approximate updates of the POMDP case is necessary to compute the optimal value function. The more complex case where ρ is not PWLC is left for Sec. 4.

3.3.1 Exact Updates

From now on, ρ(b, a), being a PWLC function, can be represented as several α-sets, one Γ^a_ρ for each a. The reward is computed as:

ρ(b, a) = max_{α∈Γ^a_ρ} Σ_s b(s) α(s).

Using this definition leads to the following changes in Eq. 3:

V_n(b) = max_{a∈A} Σ_s b(s) [ α^a_ρ(b, s) + γ Σ_o Σ_{s'} T(s, a, s') O(s', a, o) α_{n−1}(b^{a,o}, s') ],

where α^a_ρ(b) = argmax_{α∈Γ^a_ρ} (b · α). This uses the α-set Γ^a_ρ and generates |Ω| · |A| Γ-sets:

Γ_n^{a,o} = { γ P^{a,o} α_{n−1} | α_{n−1} ∈ Γ_{n−1} },

where P^{a,o}(s, s') = T(s, a, s') O(s', a, o). Exact algorithms like Value Iteration or Incremental Pruning can then be applied to this POMDP extension in a similar way as for POMDPs. The difference is that the cross-sum includes not only one α^{a,o} from each observation Γ-set Γ_n^{a,o}, but also one α_ρ from the α-set Γ^a_ρ corresponding to the reward:

Γ_n = ∪_a [ (⊕_o Γ_n^{a,o}) ⊕ Γ^a_ρ ].

Thus, the cross-sum generates |R| times more vectors than with a classic POMDP, |R| being the number of α-vectors specifying the ρ(b, a) function (more precisely, the number |R| depends on the considered action).

3.3.2 Approximate Updates

Point-based approximations can be applied in the same way as PBVI or SARSOP do for the original POMDP update. The only difference is again the representation of the reward function as an envelope of hyperplanes. PB algorithms select the hyperplane that maximizes the value function at each belief point, so the same simplification can be applied to the set Γ^a_ρ.

4 Generalizing to Other Reward Functions

Uncertainty measurements such as the negative entropy or the DSC (with m > 1 and m ≠ ∞) are not piecewise linear functions. In theory, each step of value iteration can be analytically computed using these functions, but the expressions are not closed-form as in the linear case, growing in complexity and becoming unmanageable after a few steps. Moreover, pruning techniques cannot be applied directly to the resulting hypersurfaces, and even second-order measures do not exhibit standard quadratic forms to which quadratic programming could be applied.
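For reference, the two uncertainty-based rewards introduced in Sec. 3.1 can be written in a few lines. This is an illustrative sketch with our own conventions (in particular, making the 0 log 0 = 0 convention explicit), not code from the paper.

import numpy as np

def rho_ent(b):
    # negative entropy: log2|S| + sum_s b(s) log2 b(s); maximal at the corners of Delta
    nz = b[b > 0]                      # convention: 0 * log2(0) = 0
    return np.log2(b.size) + np.sum(nz * np.log2(nz))

def rho_dsc(b, m=2):
    # distance from the simplex center in the m-norm
    c = np.full(b.size, 1.0 / b.size)
    return np.linalg.norm(b - c, ord=m)

rho_ent(np.array([1.0, 0.0, 0.0]))     # log2(3): a corner of the simplex
rho_ent(np.array([1/3, 1/3, 1/3]))     # 0: the center, minimal information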
However, convex functions can be efficiently approximated by piecewise linear functions, making it possible to apply the techniques described in Section 3.3 with a bounded error, as long as the approximation of ρ is bounded.

4.1 Approximating ρ

Consider a continuous, convex and piecewise differentiable reward function ρ(b) (for convenience, and without loss of generality, we only consider the case where ρ(b, a) = ρ(b)), and an arbitrary (and finite) set of points B ⊂ Δ where the gradient is well defined. A lower PWLC approximation of ρ(b) can be obtained by using each element b_0 ∈ B as a base point for constructing a tangent hyperplane, which is always a lower bound of ρ(b). Concretely,

ρ_{b_0}(b) = ρ(b_0) + (b − b_0) · ∇ρ(b_0)

is the linear function that represents the tangent hyperplane. Then, the approximation of ρ(b) using the set B is defined as ρ_B(b) = max_{b_0∈B} ρ_{b_0}(b). At any point b ∈ Δ, the error of the approximation can be written as

ε_B(b) = |ρ(b) − ρ_B(b)|,    (5)

and if we specifically pick b as the point where ε_B(b) is maximal (the worst error), then we can try to bound this error depending on the nature of ρ. It is well known that a piecewise linear approximation of a Lipschitz function is bounded, because the gradient ∇ρ(b_0) used to construct the hyperplane ρ_{b_0}(b) has bounded norm [18]. Unfortunately, the negative entropy is not Lipschitz (f(x) = x log_2(x) has an infinite slope when x → 0), so this result is not generic enough to cover a wide range of active sensing problems. Yet, under certain mild assumptions, a proper error bound can still be found.

The aim of the rest of this section is to find an error bound in three steps. First, we introduce some basic results on the simplex and the convexity of ρ. Informally, Lemma 4.1 will show that, for each b, it is possible to find a belief point in B far enough from the boundary of the simplex but within a bounded distance of b. Then, in a second step, we assume that the function ρ(b) satisfies the α-Hölder condition, so as to be able to bound the norm of the gradient in Lemma 4.2. In the end, Theorem 4.3 will use both lemmas to bound the error of ρ's approximation under these assumptions.

Figure 1: Simplices Δ and Δ_ε, and the points b, b' and b''.

For each point b ∈ Δ, it is possible to associate a point b* = argmax_{x∈B} ρ_x(b), corresponding to the point in B whose tangent hyperplane gives the best approximation of ρ at b. Consider the point b ∈ Δ where ε_B(b) is maximum: this error can be easily computed using the gradient ∇ρ(b*). Unfortunately, some partial derivatives of ρ may diverge to infinity on the boundary of the simplex in the non-Lipschitz case, making the error hard to analyze. Therefore, to ensure that this error can be bounded, instead of b* we take a safe b'' ∈ B (far enough from the boundary) by using an intermediate point b' in an inner simplex Δ_ε, where Δ_ε = { b ∈ [ε, 1]^N | Σ_i b_i = 1 } with N = |S|. Thus, for a given b ∈ Δ and ε ∈ (0, 1/N], we define the point b' = argmin_{x∈Δ_ε} ‖x − b‖_1 as the closest point to b in Δ_ε, and b'' = argmin_{x∈B} ‖x − b'‖_1 as the closest point to b' in B (see Figure 1). These two points will be used to find an upper bound for the distance ‖b − b''‖_1, based on the density of B, defined as

δ_B = max_{b∈Δ} min_{b'∈B} ‖b − b'‖_1.

Lemma 4.1. The distance (1-norm) between the maximum error point b ∈ Δ and the selected b'' ∈ B is bounded by ‖b − b''‖_1 ≤ 2(N − 1)ε + δ_B. [Proof in [17, Appendix]]
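As an aside before completing the bound, the construction of ρ_B, together with an empirical estimate of the error ε_B of Eq. (5), can be sketched in a few lines. The sampling-based check and the interior placement of the base points (mirroring the Δ_ε construction above) are our own illustrative choices, not the paper's code.

import numpy as np

def pwlc_lower_bound(rho, grad_rho, B):
    # rho_B(b) = max_{b0 in B} [ rho(b0) + (b - b0) . grad rho(b0) ]
    tangents = [(rho(b0), grad_rho(b0), b0) for b0 in B]
    return lambda b: max(v + (b - b0) @ g for v, g, b0 in tangents)

def neg_entropy(b):
    nz = b[b > 0]
    return np.log2(b.size) + np.sum(nz * np.log2(nz))

def neg_entropy_grad(b):
    # d/db_s [ b_s log2 b_s ] = log2 b_s + 1/ln 2; finite only for interior points
    return np.log2(b) + 1.0 / np.log(2.0)

rng = np.random.default_rng(0)
N = 3
B = 0.9 * rng.dirichlet(np.ones(N), size=50) + 0.1 / N   # base points kept off the boundary
rho_B = pwlc_lower_bound(neg_entropy, neg_entropy_grad, B)
samples = rng.dirichlet(np.ones(N), size=2000)
eps_B = max(neg_entropy(b) - rho_B(b) for b in samples)  # >= 0: rho_B is a lower bound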
$> \delta_B$, then we are sure that $b''$ is not on the boundary of the simplex $\Delta$, with a minimum distance from the boundary of $\varphi = \delta - \delta_B$. This will allow finding bounds for the PWLC approximation of convex $\alpha$-Hölder functions, which is a broad family of functions including the negative entropy, convex Lipschitz functions and others.

⁴For convenience, and without loss of generality, we only consider the case where $\rho(b, a) = \rho(b)$.

The $\alpha$-Hölder condition is a generalization of the Lipschitz condition. In our setting it means, for a function $f : D \to \mathbb{R}$ with $D \subseteq \mathbb{R}^n$, that

$$\exists \alpha \in (0, 1],\ \exists K_\alpha > 0,\ \text{s.t.}\ |f(x) - f(y)| \leq K_\alpha \|x - y\|_1^\alpha.$$

In the limit case, a convex $\alpha$-Hölder function can have an infinite-valued gradient norm only on the boundary of the simplex $\Delta$ (due to convexity), and therefore the point $b''$ is free of this predicament because of $\varphi$. More precisely, an $\alpha$-Hölder function on $\Delta$ with constant $K_\alpha$ in 1-norm complies with the Lipschitz condition on $\Delta_\varphi$ with constant $K_\alpha \varphi^{\alpha-1}$ (see [17, Appendix]). Moreover, the norm of the gradient $\|\nabla f(b'')\|_1$ is also bounded, as stated by Lemma 4.2.

Lemma 4.2. Let $\varphi > 0$ and let $f$ be an $\alpha$-Hölder (with constant $K_\alpha$), bounded and convex function from $\Delta$ to $\mathbb{R}$, $f$ being differentiable everywhere in $\Delta^\circ$ (the interior of $\Delta$). Then, for all $b \in \Delta_\varphi$, $\|\nabla f(b)\|_1 \leq K_\alpha \varphi^{\alpha-1}$. [Proof in [17, Appendix]]

Under these conditions, we can show that the PWLC approximation is bounded.

Theorem 4.3. Let $\rho$ be a continuous and convex function over $\Delta$, differentiable everywhere in $\Delta^\circ$ (the interior of $\Delta$), and satisfying the $\alpha$-Hölder condition with constant $K_\alpha$. The error of an approximation $\rho_B$ can be bounded by $C\delta_B^\alpha$, where $C$ is a scalar constant. [Proof in [17, Appendix]]

4.2 Exact Updates

Knowing that the approximation of $\rho$ is bounded for a wide family of functions, the techniques described in Sec. 3.3.1 can be directly applied using $\rho_B(b)$ as the PWLC reward function. These algorithms can be safely used because the propagation of the error due to exact updates is bounded. This can be proven using a similar methodology as in [11, 10]. Let $V_t$ be the value function using the PWLC approximation described above and $V_t^*$ the optimal value function, both at time $t$, with $H$ the exact update operator and $\tilde{H}$ the same operator with the PWLC approximation. Then the error with respect to the real value function satisfies

$$\begin{aligned}
\|V_t - V_t^*\|_\infty &= \|\tilde{H}V_{t-1} - HV_{t-1}^*\|_\infty && \text{(by definition)}\\
&\leq \|\tilde{H}V_{t-1} - HV_{t-1}\|_\infty + \|HV_{t-1} - HV_{t-1}^*\|_\infty && \text{(triangle inequality)}\\
&\leq C\delta_B^\alpha + \|HV_{t-1} - HV_{t-1}^*\|_\infty && \text{(maximum error at } b\text{, Theorem 4.3)}\\
&\leq C\delta_B^\alpha + \gamma\|V_{t-1} - V_{t-1}^*\|_\infty && \text{(contraction)}\\
&\leq \frac{C\delta_B^\alpha}{1-\gamma} && \text{(sum of a geometric series).}
\end{aligned}$$

For these algorithms, the selection of the set $B$ remains open, raising similar issues as the selection of belief points in PB algorithms.

4.3 Approximate Updates

In the case of PB algorithms, the extension is also straightforward, and the algorithms described in Sec. 3.3.2 can be used with a bounded error. The set $B$ of points for the PWLC approximation and the set of belief points for the algorithm can be shared⁵. This simplifies the study of the bound when both approximation techniques are used at the same time. Let $\tilde{V}_t$ be the value function at time $t$ calculated using the PWLC approximation and a PB algorithm. Then the error between $\tilde{V}_t$ and $V_t^*$ is $\|\tilde{V}_t - V_t^*\|_\infty \leq \|\tilde{V}_t - V_t\|_\infty + \|V_t - V_t^*\|_\infty$. The second term is the same as in Sec. 4.2, so it is bounded by $C\delta_B^\alpha/(1-\gamma)$.
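Reusing `neg_entropy` and `rho_B` from the sketch in Sec. 4.1, here is a quick numeric check of Theorem 4.3 on the two-state simplex; the grid sizes and the value of $\delta$ are arbitrary choices for illustration.

```python
import numpy as np

def grid(k):
    """Beliefs (i/k, 1 - i/k) on the two-state simplex."""
    return [np.array([i / k, 1.0 - i / k]) for i in range(k + 1)]

delta = 0.05
B = [b for b in grid(20) if b.min() >= delta]   # base points kept inside Delta_delta

worst = max(neg_entropy(b) - rho_B(b, B) for b in grid(2000))
print(f"worst-case eps_B over the grid: {worst:.4f}")  # shrinks as B densifies
```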
The first term can be bounded by the same reasoning as in [11], where

$$\|\tilde{V}_t - V_t\|_\infty \leq \frac{(R_{\max} - R_{\min} + C\delta_B^\alpha)\,\delta_B}{1-\gamma},$$

with $R_{\min}$ and $R_{\max}$ the minimum and maximum values of $\rho(b)$ respectively. This is because the worst case for an $\alpha$-vector is $R_{\min}/(1-\gamma)$, while the best case is only $R_{\max}/(1-\gamma)$, because the approximation is always a lower bound.

⁵Points from $\Delta$'s boundary can be removed where the gradient is not defined, as the proofs only rely on interior points.

5 Conclusions

We have introduced ρPOMDPs, an extension of POMDPs that allows for expressing sequential decision-making problems where reducing the uncertainty of some state variables is an explicit objective. In this model, the reward $\rho$ is typically a convex function of the belief state. Using the convexity of $\rho$, a first important result that we prove is that a Bellman backup $V_n = HV_{n-1}$ preserves convexity. In particular, if $\rho$ is PWLC and the value function $V_0$ is equal to 0, then $V_n$ is also PWLC and it is straightforward to adapt many state-of-the-art POMDP algorithms. Yet, if $\rho$ is not PWLC, performing exact updates is much more complex. We therefore propose employing PWLC approximations of the convex reward function at hand to come back to a simple case, and show that the resulting algorithms converge to the optimal value function in the limit.

Previous work has already introduced belief-dependent rewards, such as Spaan's discussion of POMDPs and Active Perception [19], or Hero et al.'s work on sensor management using POMDPs [5]. Yet the first only presents the problem of non-PWLC value functions without giving a specific solution, while the second solves the problem using Monte-Carlo techniques that do not rely on the PWLC property. In the robotics field, uncertainty measurements within POMDPs have been widely used as heuristics [2], with very good results but no convergence guarantees. These techniques use only state-dependent rewards, but uncertainty measurements are employed to speed up the solving process, at the cost of losing some basic properties (e.g. the Markovian property). Our work paves the way for solving problems with belief-dependent rewards, using new algorithms that approximate the value function (e.g. point-based ones) in a theoretically sound manner. An important point is that the time complexity of the new algorithms changes only because of the size of the approximation of $\rho$. Future work includes conducting experiments to measure the increase in complexity. A more complex task is to evaluate the quality of the resulting approximations, due to the lack of other algorithms for ρPOMDPs. An option is to look at online Monte-Carlo algorithms [20], as they should require few changes.

Acknowledgements

This research was supported by the CONICYT-Embassade de France doctoral grant and the COMAC project. We would also like to thank Bruno Scherrer for the insightful discussions and the anonymous reviewers for their helpful comments and suggestions.

References
[1] R. Smallwood and E. Sondik. The optimal control of partially observable Markov decision processes over a finite horizon. Operations Research, 21:1071-1088, 1973.
[2] S. Thrun. Probabilistic algorithms in robotics. AI Magazine, 21(4):93-109, 2000.
[3] L. Mihaylova, T. Lefebvre, H. Bruyninckx, K. Gadeyne, and J. De Schutter. Active sensing for robotics - a survey. In Proc. 5th Intl. Conf. on Numerical Methods and Applications, 2002.
[4] S. Ji and L. Carin. Cost-sensitive feature acquisition and classification. Pattern Recognition, 40(5):1474-1485, 2007.
[5] A. Hero, D.
Castan, D. Cochran, and K. Kastella. Foundations and Applications of Sensor Management. Springer Publishing Company, Incorporated, 2007.
[6] A. Cassandra. Exact and approximate algorithms for partially observable Markov decision processes. PhD thesis, Providence, RI, USA, 1998.
[7] R. Bellman. The theory of dynamic programming. Bull. Amer. Math. Soc., 60:503-516, 1954.
[8] G. Monahan. A survey of partially observable Markov decision processes. Management Science, 28:1-16, 1982.
[9] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33-94, 2000.
[10] W. Lovejoy. Computationally feasible bounds for partially observed Markov decision processes. Operations Research, 39(1):162-175, 1991.
[11] J. Pineau, G. Gordon, and S. Thrun. Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research (JAIR), 27:335-380, 2006.
[12] M. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195-220, 2005.
[13] T. Smith and R. Simmons. Point-based POMDP algorithms: Improved analysis and implementation. In Proc. of the Int. Conf. on Uncertainty in Artificial Intelligence (UAI), 2005.
[14] H. Kurniawati, D. Hsu, and W. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems IV, 2008.
[15] R. Kaplow. Point-based POMDP solvers: Survey and comparative analysis. Master's thesis, Montreal, Quebec, Canada, 2010.
[16] T. Cover and J. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.
[17] M. Araya-López, O. Buffet, V. Thomas, and F. Charpillet. A POMDP extension with belief-dependent rewards - extended version. Technical Report RR-7433, INRIA, Oct 2010. (See also NIPS supplementary material.)
[18] R. Saigal. On piecewise linear approximations to smooth mappings. Mathematics of Operations Research, 4(2):153-161, 1979.
[19] M. Spaan. Cooperative active perception using POMDPs. In AAAI 2008 Workshop on Advancements in POMDP Solvers, July 2008.
[20] S. Ross, J. Pineau, S. Paquet, and B. Chaib-draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research (JAIR), 32:663-704, 2008.
Random Walk Approach to Regret Minimization

Hariharan Narayanan, MIT, Cambridge, MA 02139, har@mit.edu
Alexander Rakhlin, University of Pennsylvania, Philadelphia, PA 19104, rakhlin@wharton.upenn.edu

Abstract

We propose a computationally efficient random walk on a convex body which rapidly mixes to a time-varying Gibbs distribution. In the setting of online convex optimization and repeated games, the algorithm yields low regret and presents a novel efficient method for implementing mixture forecasting strategies.

1 Introduction

This paper brings together two topics: online convex optimization and sampling from logconcave distributions over convex bodies. Online convex optimization has been a recent focus of research [30, 25], for it presents an abstraction that unifies and generalizes a number of existing results in online learning. Techniques from the theory of optimization (in particular, Fenchel and minimax duality) have proven to be key for understanding the rates of growth of regret [25, 1]. Deterministic regularization methods [3, 25] have emerged as natural black-box algorithms for regret minimization, and the choice of the regularization function turned out to play a pivotal role in limited-feedback problems [3]. In particular, the authors of [3] demonstrated the role of self-concordant regularization functions and the Dikin ellipsoid for minimizing regret. The latter gives a handle on the local geometry of the convex set, crucial for linear optimization with limited feedback.

Random walks in a convex body gained much attention following the breakthrough paper of Dyer, Frieze and Kannan [9], who exhibited a polynomial-time randomized algorithm for estimating the volume of a convex body. It is known that the problem of computing this volume by a deterministic algorithm is #P-hard. Over the two decades following [9], the polynomial dependence of volume computation on the dimension $n$ has been drastically decreased, from $O^*(n^{23})$ to $O^*(n^4)$ [17]. This development was accomplished through the study of several geometric random walks: the Ball Walk and Hit-and-Run (see [26] for a survey). The driving force behind such results are isoperimetric inequalities, which can be extended from uniform to general logconcave distributions. In particular, computing the volume of a convex body can be seen as a special case of integrating a logconcave function, and there have been a number of major results on mixing times for sampling from logconcave distributions [17, 18]. Connections to optimization have been established in [12, 18], among others. More recently, a novel random walk, the Dikin Walk, has been proposed in [19, 13]. By exploiting the local geometry of the set, this random walk is shown to mix rapidly, and it offers a number of advantages over the other random walks.

While the aim of online convex optimization is different from that of sampling from logconcave distributions, the fact that the two communities recognized the importance of the Dikin ellipsoid is remarkable. In this paper we build a bridge between the two topics. We show that the problem of online convex optimization can be solved by sampling from logconcave distributions, and that the Dikin Walk can be adapted to mix rapidly to a certain time-varying distribution. In fact, it mixes fast enough that for linear cost functions only one step of the guided Dikin Walk is necessary per round of the repeated game.
This is surprisingly similar to the sufficiency of one Damped Newton step of Algorithm 2 in [3], due to the locally quadratic convergence ensured by the self-concordant regularizer.

The time-varying Gibbs distributions from which we sample are closely related to Mixture Forecasters and Bayesian Model Averaging methods (see [7, Section 11.10] as well as [29, 28, 4, 10]). To the best of our knowledge, the method presented in this paper is the first provably computationally efficient approach to solving a class of problems which involve integrating over continuous sets of decisions. From the Bayesian point of view, our algorithm is an efficient procedure for sampling from posterior distributions, and can be used in settings outside of regret minimization.

Prior work: The closest to our work is the result of [11] for Universal Portfolios. Unlike our one-step Markov chain, the algorithm of [11] works with a discretization of the probability simplex and requires a number of steps which has adverse dependence on the time horizon and accuracy. This seems unavoidable with the Grid Walk. In [2], it was shown that the Weighted Average Forecaster [15, 27] on a prohibitively large class of experts is optimal in terms of regret for a certain multitask problem, yet computationally inefficient. A Markov chain was proposed with the required stationary distribution, but no mixing time bounds were derived. In [8], the authors faced a similar problem, whereby a near-optimal regret can be achieved by the Weighted Average Forecaster on a prohibitively large discretization of the set of decisions. Sampling from time-varying Markov chains has been investigated in the context of network dynamics [24], and has been examined from the point of view of linear stochastic approximation in reinforcement learning [14]. Beyond [11], we are not aware of any results to date where a provably rapidly mixing walk is used to solve regret minimization problems.

It is worth emphasizing that without the Dikin Walk [19], the one-step mixing results of this paper seem out of reach. In particular, when sampling from exponential distributions, the known bounds for the conductance of the Ball Walk and Hit-and-Run are not scale-independent. In order to obtain $O(\sqrt{T})$ regret, one has to be able to sample the target distribution with an error that is $O(1/\sqrt{T})$. As a consequence of the deterioration of the bounds on the conductance as the scale tends to zero, the number of steps necessary per round would tend to infinity as $T$ tends to infinity.

2 Main Results

Let $K \subset \mathbb{R}^n$ be a convex compact set and let $\mathcal{F}$ be a set of convex functions from $K$ to $\mathbb{R}$. Online convex optimization is defined as a repeated $T$-round game between the player (the algorithm) and Nature (the adversary) [30, 25]. From the outset we assume that Nature is oblivious (see [7]), i.e. the individual sequence of decisions $\ell_1, \ldots, \ell_T \in \mathcal{F}$ can be fixed before the game. We are interested in randomized algorithms, and hence we consider the following online learning model: on round $t$, the player chooses a distribution (or mixed strategy) $\mu_{t-1}$ supported on $K$ and "plays" a random $X_t \sim \mu_{t-1}$. Nature then reveals the cost function $\ell_t \in \mathcal{F}$. The goal of the player is to control expected regret (see Lemma 1) with respect to a randomized strategy defined by a fixed distribution $p_U \in \mathcal{P}$, for some collection of distributions $\mathcal{P}$. If $\mathcal{P}$ contains Dirac delta distributions, the comparator term is indeed the best fixed decision $x^* \in K$ chosen in hindsight.
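For reference, the quantity being controlled is the expected regret against the randomized comparator $U \sim p_U$, spelled out here in the notation used below:

$$R_T(p_U) \;=\; \mathbb{E}\Big[\sum_{t=1}^T \ell_t(X_t)\Big] - \mathbb{E}\Big[\sum_{t=1}^T \ell_t(U)\Big], \qquad X_t \sim \mu_{t-1},$$

with $\mu_{t-1}$ the player's mixed strategy at round $t$, instantiated in (2) below. Hannan consistency, defined next, asks that $R_T(p_U)$ grow sublinearly in $T$ for every $p_U \in \mathcal{P}$.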
A procedure which guarantees sublinear growth of regret for any distribution $p_U \in \mathcal{P}$ will be called Hannan consistent with respect to $\mathcal{P}$. We now state a natural procedure for updating distributions $\mu_t$ which guarantees Hannan consistency for a wide range of problems. This procedure is similar to the Mixture Forecaster used in the prediction context [29, 28, 4, 10]. Denote the cumulative cost functions by $L_t(x) = \sum_{s=1}^t \ell_s(x)$, with $L_0(x) \equiv 0$, and let $\eta > 0$ be a learning rate. Let $q_0(x)$ be some prior probability distribution supported on $K$. Define the following sequence of functions:

$$q_t(x) = q_0(x)\exp\{-\eta L_t(x)\}, \quad \forall t \in \{1, \ldots, T\}, \qquad (1)$$

for every $x \in K$. Define the probability distribution $\mu_t$ over $K$ at time $t$ to have density

$$\frac{d\mu_t(x)}{dx} = \frac{q_0(x)\,e^{-\eta L_t(x)}}{Z_t}, \quad \text{where } Z_t = \int_{x \in K} q_t(x)\,dx. \qquad (2)$$
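As a quick illustration of the update (1)-(2), the sketch below maintains $q_t$ on a crude grid over $K = [-1, 1]$ and plays $X_t \sim \mu_{t-1}$. The grid and the cost model are assumptions made only for this example; the point of the paper is precisely to avoid discretization by sampling $\mu_t$ with a single step of a guided walk.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 1001)   # crude stand-in for K = [-1, 1]
log_q = np.zeros_like(X)           # log q_0: uniform prior
eta = 0.1

for t in range(100):
    p = np.exp(log_q - log_q.max())        # density of mu_{t-1} on the grid
    p /= p.sum()
    x_t = rng.choice(X, p=p)               # play X_t ~ mu_{t-1}
    c = rng.uniform(0.0, 1.0)              # Nature reveals l_t(x) = c * (x + 1) / 2 in [0, 1]
    # (x_t incurs loss c * (x_t + 1) / 2 this round)
    log_q -= eta * c * (X + 1.0) / 2.0     # q_t = q_0 * exp(-eta * L_t)
```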
This issue does not arise when dealing with distributions, but instead translates into the difficulty of sampling. We find these parallels between sampling and optimization intriguing. We defer the easy proof of Lemma 1 to p. 8. Having a bound on regret, a natural question is whether there exists a computationally efficient algorithm for playing Xt according to the mixed strategy given in (2). The main result of this paper is that for linear Lipschitz cost functions the guided random walk (Algorithm 1 below) produces a sequence of points X1 , . . . , XT ? K with respective distributions ?0 , . . . , ?T ?1 such that ?i is close to ?i for all 0 ? i ? T ?1. Moreover, Xi is obtained from Xi?1 with only one random step. The step requires sampling from a Gaussian distribution with covariance given by the Hessian of the self-concordant barrier and can be implemented efficiently whenever the Hessian can be computed. The computation time exactly matches [3, Algorithm 2]: it is the same as time spent inverting a Hessian matrix, which is O(n3 ) or less. Let us now discuss our assumptions. First, the analysis of the random walk is carried out only for linear cost functions with a bounded Lipschitz constant. An analysis for general convex functions might be possible, but for the sake of brevity we restrict ourselves to the linear case. Note that convex cost functions can be linearized and a standard argument shows that regret for linearized functions can only be larger than that for the convex functions [30]. The second assumption is that q0 does not depend on T and has a bounded L2 norm with respect to the uniform distribution on K. This means that q0 can be not only uniform, but, for instance, of the form q0 (x) ? exp{?R(x)}. Theorem 2. Suppose !t : K *? [0, 1] are linear functions with Lipschitz constant 1 and the prior q0 is of bounded L2 norm with respect to uniform distribution on K. Then the one-step random walk (Algorithm 1) produces a sequence X1 , . . . , XT with distributions ?0 , . . . , ?T ?1 such that for all i, " |d?i (x) ? d?i (x)| ? C?n3 ? 2 , x?K where ?i are defined in (2), ? is the parameter of?self-concordance, and C is an absolute constant. Therefore, regret of Algorithm 1 is within O( T ) from the ideal procedure of Lemma 1. In 3 particular, by choosing ? appropriately, for an absolute constant C $ , # T % T $ $ & E !t (Xt ) ? !t (U ) ? C $ n3/2 ? T D(pU ||?0 ). t=1 (3) t=1 Proof. The statement follows directly from Lemma 1, Theorem 9, and an observation that for bounded losses " ' ' 'E?t?1 !t (Xt ) ? E?t?1 !t (Xt )' ? |!t (x)| ? |d?t?1 (x) ? d?t?1 (x)| ? C?n3 ? 2 . x?K 3 Sampling from a time-varying Gibbs distribution Sketch of the Analysis The sufficiency of only one step of the random walk is made possible by the fact that the distributions ?t?1 and ?t are close, and thus ?t?1 is a (very) warm start for ?t . The reduction in distance between the distributions after a single step is due to a general fact (Lov?asz-Simonovits [16]) which we state in Theorem 6. The majority of the work goes into lower bounding the conductance of the random walk by a quantity independent of T (Lemma 5). 
Since the random walk of Algorithm 1 takes advantage of the local geometry of the set, the conductance is lower bounded by (a) proving an isoperimetric inequality (Theorem 3) for the Riemannian metric (which states that the measure of the gap between two well-separated sets is large) and (b) by proving that for close-by (in the Riemannian metric) points, their transition functions are not too different (Lemma 4). Section 3 is organized as follows. In Section 3.1, the main building blocks for proving mixing time are stated, and their proofs appear later in Section 4. In Section 3.2, we use the mixing result of Section 3.1 to show that Algorithm 1 indeed closely tracks the distributions ?t (Theorem 9). 3.1 Bounding Mixing Time In the remainder of this paper, C will denote a universal constant that may change from line to line. For any function F on the interior int(K) having continuous derivatives of order k, for vectors h1 , . . . , hk ? Rn and x ? int(K), for k ? 1, we recursively define Dk?1 (x + %hk )[h1 , . . . , hk?1 ] ? Dk?1 (x)[h1 , . . . , hk?1 ] , #?0 % where D0 F (x) := F (x). Let F be a self-concordant barrier of K with a parameter ? (see [20]). For x, y ? K, ?(x, y) is the distance in the Riemannian metric whose metric tensor is the Hessian of F . Thus, the metric tensor on the tangent space at x assigns to a vector v the length -v-x := 2 D2 F (x)[v, v], and ( to a pair of vectors v, w, the inner product .v, w/x := D F (x)[v, w]. We have ?(x, y) = inf ? z -d?-z where the infimum is taken over all rectifiable paths ? from x to y. Let M be the metric space whose point set is K and metric is ?. We assume !i are linear and 1?Lipschitz with respect to ?. For x ? int(K), let Gx denote the unique Gaussian probability density function on Rn such that ) * n-x ? y-2x 1 Gx (y) ? exp ? + V (x) , where V (x) = ln det D2 F (x) and r = 1/(Cn) r2 2 Dk F (x)[h1 , . . . , hk ] := lim Further, define the scaled cumulative cost as st (y) := r2 ?Lt (y). Note that shape of Gx is precisely given by the Dikin ellipsoid, which is defined as a unit ball in - ?- x around a point x [20, 3]. The Markov chain Mt considered in this paper is such that for x, y ? K, one step x ? y is given by Algorithm 1. A simple calculation shows that the detailed balance conditions are satisfied with respect to a stationary distribution ?t (defined in Eq. (2)). Therefore the Markov chain is reversible and has this stationary measure. The next results imply that this Markov chain is rapidly mixing. The first main ingredient is an isoperimetric inequality necessary for lower bounding conductance. Theorem 3. Let S1 and S2 be measurable subsets of K and ? a probability measure supported on K that possesses a density whose logarithm is concave. Then, 1 ?((K \ S1 ) \ S2 ) ? ?(S1 , S2 )?(S1 )?(S2 ). 2(1 + 3?) 4 Algorithm 1 One Step Random Walk (Xt , st ) Input: current point Xt ? K and scaled cumulative cost st . Output: next point Xt+1 ? K Toss a fair coin. If Heads, set Xt+1 := Xt . Else, K Xt+1 Sample Z from GXt . If Z ? / K, let Xt+1 := Xt . If Z ? K, let + , t ) exp(st (Xt )) Z with prob. min 1, GGZX(X(Z) exp(st (Z)) t Xt+1 := Xt otherwise. Xt Figure 1: The new point is sam- pled from a Gaussian distribution whose shape is defined by the local metric. Dotted lines are the unit Dikin ellipsoids. The next Lemma relates the Riemannian metric ? to the Markov Chain. Intuitively, it says that for close-by points, their transition distributions cannot be far apart. r Lemma 4. If x, y ? K and ?(x, y) ? C ? , then dT V (Px , Py ) ? 1 ? 
C1 . n Theorem 3 and Lemma 4 together give a lower bound on conductance of the Markov Chain. Lemma 5 (Bound on Conductance). Let ? be any exponential distribution on K. The conductance ( Px (K \ S1 )d?(x) ? := inf 1 S1 ?(S1 ) ?(S1 )? 2 of the Markov Chain in Algorithm 1 is bounded below by 1? . C?n n The lower bound on conductance of Lemma 5 can now be used with the following general result on the reduction of distance between distributions. Theorem 6 (Lov?asz-Simonovits [16]). Let ?0 be the initial distribution for a lazy reversible ergodic Markov chain whose conductance is ? and stationary measure is ?, and ?k be the distribution of 0 (S) the k th step. Let M := supS ??(S) where the supremum is over all measurable subsets S of K. For .( every bounded f , let -f -2,? denote f (x)2 d?(x). For any fixed f , let Ef be the map that takes K ( ( x to K f (y)dPx (y). Then if K f (x)d?(x) = 0, ) *k ?2 k -E f -2,? ? 1 ? -f -2,? . 2 In summary, Lemma 5 provides a lower bound on conductance, while Theorem 6 ensures reduction of the norm whenever conductance is large enough. In the next section, these two are put together. We will show that reduction in the norm guarantees that the distribution after one step of the random walk (k = 1 in Theorem 6) is close to the desired distribution ?t . 3.2 Tracking the distributions ? Let {?i }i=1 be the probability measures with bounded density, supported on K, corresponding to the distribution of a point during different steps of the evolution of the algorithm. For i ? N, let - ? -?i denote the L2 norm with respect to the measure ?i . We shall write - ? -i for brevity. Hence, /( 01/2 for a measurable function f : K ? R, -f -i = K f 2 d?i . Furthermore, d?i (x) q0 (x)e??Li (x) dx Zt+1 = sup ? e2? ? 1 + ?? ??Li+1 (x) dx Z t x?K d?i+1 (x) x?K q0 (x)e sup (4) where we used the fact that !i+1 (x) ? 1 and ?? is an appropriate multiple of ?, e.g. ?? = (e2 ? 1)? does the job. Analogously, d?i+1 /d?i ? 1 + ?? over K. It then follows that the norms at time i and i + 1 are comparable: -f -i (1 + ??)?1 ? -f -i+1 ? -f -i (1 + ??) 5 (5) The mixing results of Lemma 5 together with Theorem 6 imply Corollary 7. For any i, 1 1 1 ) 1 ** ) 1 d?i+1 1 1 d?i 1 1 1 1 1 1? 1 ? 1 ? 1 ? 1 d?i 1 1 d?i 1 Cn3 ? 2 i i Corollary 7 says that ?i+1 is ?closer? than ?i to ?i by a multiplicative constant. We now show that the distance of ?i+1 to ?i+1 is (additively) not much worse than its distance to ?i . The multiplicative reduction in distance is shown to be dominating the additive increase, concluding the proof that ?i is close to ?i for all i (Theorem 9). Lemma 8. For any i, it holds that 1 1 1 1 1 1 1 d?i+1 1 2 1 ?i+1 1 1 + ??(1 + ??). 1 ? 1 ? (1 + ? ? ) ? 1 1 1 1 d?i+1 1 d?i i+1 i Proof. 1 1 1 1 1 d?i+1 1 1 d?i+1 1 1 1 1 1 ? 1 ? ? 1 1 d?i+1 1 1 d?i 1 i+1 i 1 1 1 1 1 d?i+1 1 1 d?i+1 1 1 1 1 = 1 ? 1 ? ? 1 1 d?i+1 1 1 d?i 1 i+1 i+1 1 1 1 1 1 d?i+1 1 d?i+1 1 1 1 1 1 . +1 ? 1 ? ? 1 1 d?i 1 d?i 1 1 i+1 i (6) (7) We first establish a bound of C? on (6). For any function f : K ? R, let f + (x) = max(0, f (x)) and f ? (x) = min(0, f (x)). By the triangle inequality, 1 1 1 1 1 1 1 d?i+1 1 d?i+1 1 d?i+1 1 1 d?i+1 1 1 1 1 1 1 1 ? ? ? 1 ? 1 ? . 1 d?i+1 1 1 d?i 1 1 d?i+1 d?i 1i+1 i+1 i+1 Now, using (4) and (5), 1) 1) 1 12 *+ 1 *? 1 1 d? 12 1 d? 12 1 d?i+1 1 d? d? d? 1 1 1 1 i+1 i+1 i+1 i+1 i+1 1 1 =1 ? +1 ? 1 1 1 d?i+1 ? d?i 1 1 1 1 1 d? d? d? d? i+1 i i+1 i i+1 i+1 i+1 1 1 2 312 2 312 1 d?i+1 1 1 1 d? d? d? i i 1 1 i+1 ??1 1 < 1 + ?1 ? ? 1 1 ? 
1 d?i 1 1 d?i+1 i+1 d?i d?i+1 1i+1 1 1 12 12 1 1 1 1 2 1 d?i+1 1 2 2 1 d?i+1 1 = ?? 1 ? ?? (1 + ??) 1 . 1 d?i i+1 d?i 1i Thus, (6) is bounded as 1 1 1 1 1 1 1 1* ) 1 d?i+1 1 1 d?i+1 1 1 d?i+1 1 1 ?i+1 1 1 1 1 1 1 1 1 ?1 ? 11 ? ??(1 + ??) 1 = ??(1 + ??) 1 + 1 ? 11 1 d?i+1 ? 11 1 1 d?i d?i i d?i i+1 i+1 i Next, a bound on (7) follows simply by the norm comparison inequality (5): 1 1 1 1 1 1 1 d?i+1 1 1 d?i+1 1 1 d?i+1 1 1 1 1 1 1 ?1 ? 11 ? ?? 1 ? 11 1 d?i ? 11 1 . d? d? i i i+1 i i The statement follows by rearranging the terms. 1 1 1 d?0 1 Theorem 9. If 1 d? ? 1 1 < ??(1 + ??), where ?? = (e2 ? 1)?, then for all i, 0 0 1 1 1 d?i 1 1 1 ? C?n3 ? 2 . ? 1 1 d?i 1 i Consequently, for all i " x?K |d?i (x) ? d?i (x)| ? C?n3 ? 2 . 6 Proof. By Corollary 7 and Lemma 8, we see that 1 1 1 ** 1 ) ) 1 d?i+1 1 1 d?i 1 1 2 1 1 1 1 + ??(1 + ??). ? 1 ? (1 + ? ? ) 1 ? ? 1 1 d?i+1 1 1 1 3 2 Cn ? d?i i+1 Since ?? = i o( n31? 2 ), 1 1 1 ** 1 ) ) 1 d?i 1 d?i+1 1 1 1 1 1 1 1 ? 1? 1 d?i ? 11 + C?. 1 d?i+1 ? 11 3?2 Cn i+1 i (8) Let 0 ? a < 1 and b > 0, and x0 , x1 , . . . , be any sequence of non-negative numbers such that, b x0 ? b and for each i, xi+1 ? axi + b. We see, by unfolding the recurrence, that xi+1 ? 1?a . From this and (8), the first statement of the theorem follows. The second statement follows from 4" ) 51/2 1 ' 1 *2 " " ' ' d?i ' 1 d?i 1 d? i ' ' 1 |d?i ? d?i | = ' ? 1' d?i ? ? 1 d?i =1 ? 11 1 . d?i d?i d?i i 4 Proof Sketch In this section, we prove the main building blocks stated in Section 3.1. Consider a time step t. Let dT V represent total variation distance. Without loss of generality, assume x is the origin and assume st (x) = 0. For x ? K and a vector v, |v|x is defined to be sup ?. The following relation holds: x??v?K Theorem 10 (Theorem 2.3.2 (iii) [21]). Let F be a self-concordant barrier whose self-concordance parameter is ?. Then |h|x ? -h-x ? 2(1 + 3?)|h|x for all h ? Rn and x ? int(K). We term (S1 , (M \ S1 ) \ S2 , S2 ) a ?-partition of M, if ? ? dM (S1 , S2 ) := inf x?S1 ,y?S2 dM (x, y), where S1 , S2 are measurable subsets of M. Let P? be the set of all ?-partitions of M. If ? is a measure on M, the isoperimetric constant is defined as ) * r ?((M \ S1 ) \ S2 ) C(?, M, ?) := inf and Ct := C ? , M, ?t . P? ?(S1 )?(S2 ) n Given interior points x, y in int(K), suppose p, q are the ends of the chord in K containing x, y and p, x, y, q lie in that order. Denote by ?(x, y) the cross ratio |x?y||p?q| |p?x||q?y| . Let dH denote the Hilbert (projective) metric defined by dH (x, y) := ln (1 + ?(x, y)) . For two sets S1 and S2 , let ?(S1 , S2 ) := inf x?S1 ,y?S2 ?(x, y). Proof of Theorem 3. For any z on the segment xy an easy computation shows that dH (x, z) + dH (z, y) = dH (x, y). Therefore it suffices to prove the result infinitesimally. By a result due to Nesterov and Todd [22, Lemma 3.1], -x ? y-x ? -x ? y-2x ? ?(x, y) ? ? ln(1 ? -x ? y-x ). whenever -x ? y-x < 1. From (9) limy?x lim y?x ?(x,y) )x?y)x (9) = 1, and a direct computation shows that dH (x, y) ?(x, y) = lim ? 1. y?x |x ? y|x |x ? y|x Hence, using Theorem 10, the Hilbert metric and the Riemannian metric satisfy ?(x, y) ? 2(1 + 3?)dH (x, y). The statement of the theorem is now an immediate consequence of the following result due to Lov?asz and Vempala [18]: If S1 and S2 are measurable subsets of K and ? a probability measure supported on K that possesses a density whose logarithm is concave, then ?((K \ S1 ) \ S2 ) ? ?(S1 , S2 )?(S1 )?(S2 ). 7 Proof of Lemma 5. Let S1 be a measurable subset of K such that ?(S1 ) '? 
12 and S2 := K \ S1 be ' its complement. Let S1$ = S1 ? {x'Px (S2 ) ? 1/C} and S2$ = S2 ? {y 'Py (S1 ) ? 1/C}. By the reversibility of the chain, which is easily checked, " " Px (S2 )d?(x) = Py (S1 )d?(y). S1 S2 If x ? S1$ and y ? S2$ then, dT V (Px , Py ) := 1 ? " min K r ? , C n ) * dPy 1 dPx (w), (w) d?(w) = 1 ? . d? d? C then dT V (Px , Py ) ? 1 ? C1 . Therefore r ?(S1$ , S2$ ) := inf ?(x, y) ? ? . " " x?S1 ,y?S2 C n Therefore Theorem 3 implies that ?(S1$ , S2$ ) r ? min(?(S1$ ), ?(S2$ )). ?((K \ S1$ ) \ S2$ ) ? min(?(S1$ ), ?(S2$ )) ? 2(1 + 3?) C? n Lemma 4 implies that if ?(x, y) ? First suppose ?(S1$ ) ? (1 ? C1 )?(S1 ) and ?(S2$ ) ? (1 ? C1 )?(S2 ). Then, " C?(S1$ ) C min(?(S1$ ), ?(S2$ )) Px (S2 )d?(x) ? ?((K \ S1$ ) \ S2$ ) ? ? C C S1 and we are done. Otherwise, without loss of generality, suppose ?(S1$ ) ? (1 ? " ?(S1 ) Px (S2 )d?(x) ? C S1 and we are done. 1 C )?(S1 ). (10) Then Proof of Lemma 1. We have that " " qt?1 Zt Zt Zt D(?t?1 ||?t ) = d?t?1 log = log + ?!t (x)d?t?1 (x) = log + ?E!t (Xt ). Z q Z Z t?1 t t?1 t?1 K K (11) Rearranging, canceling the telescoping terms, and using the fact that Z0 = 1 ?E T $ t=1 !t (Xt ) = T $ t=1 D(?t?1 ||?t ) ? log ZT . Let U be a random variable with a probability distribution pU . Then " " T $ qT (u) ? E!t (U ) = ? ?1 ??LT (u)dpU (u) = ? ?1 dpU (u) log q0 (u) K K t=1 Combining, # T % " T T $ $ $ qT (u)/ZT E !t (Xt ) ? !t (U ) = ? ?1 dpU (u) log + ? ?1 D(?t?1 ||?t ) q0 (u) K t=1 t=1 t=1 = ? ?1 (D(pU ||?0 ) ? D(pU ||?T )) + ? ?1 T $ t=1 D(?t?1 ||?t ). Now, from Eq. (11), the KL divergence can be also written as ( ??) (x) e t qt?1 (x)dx D(?t?1 ||?t ) = log K ( + ?E!t (Xt ) = log Ee??()t (Xt )?E)t (Xt )) q (x)dx K t?1 By representing the divergence in this form, one can obtain upper bounds via known methods, such as Log-Sobolev inequalities (e.g. [6]). In the simplest case of bounded loss, it is easy to show that D(?t?1 ||?t ) ? O(? 2 ), and the particular constant 1/8 can be obtained by, for instance, applying Lemma A.1 in [7]. This proves the second part of the lemma. 8 References [1] J. Abernethy, A. Agarwal, P. L. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT ?09, 2009. [2] J. Abernethy, P. L. Bartlett, and A. Rakhlin. Multitask learning with expert advice. In Proceedings of The Twentieth Annual Conference on Learning Theory, pages 484?498, 2007. [3] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of The Twenty First Annual Conference on Learning Theory, 2008. [4] K. S. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211?246, June 2001. [5] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3):167?175, 2003. [6] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities using the entropy method. 31:1583? 1614, 2003. [7] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. [8] V. Dani, T. P. Hayes, and S. Kakade. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems 20. Cambridge, MA, 2008. [9] M. Dyer, A. Frieze, and R. Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. Journal of the ACM (JACM), 38(1):1?17, 1991. [10] S. Kakade and A. Ng. Online bounds for Bayesian algorithms. 
In Proceedings of Neural Information Processing Systems (NIPS 17), 2005. [11] A. Kalai and S. Vempala. Efficient algorithms for universal portfolios. The Journal of Machine Learning Research, 3:440, 2003. [12] A.T. Kalai and S. Vempala. Simulated annealing for convex optimization. Mathematics of Operations Research, 31(2):253?266, 2006. [13] R. Kannan and H. Narayanan. Random walks on polytopes and an affine interior point method for linear programming. In STOC, 2009. Accepted (pending revisions), Mathematics of Operations Research. [14] V. R. Konda and J. N. Tsitsiklis. Linear stochastic approximation driven by slowly varying markov chains. Systems and Control Letters, 50:95?102, 2003. [15] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212?261, 1994. [16] L. Lov?asz and M. Simonovits. Random walks in a convex body and an improved volume algorithm. Random Structures and Algorithms, 4(4):359?412, 1993. [17] L. Lov?asz and S. Vempala. Simulated annealing in convex bodies and an o? (n4 ) volume algorithm. J. Comput. Syst. Sci., 72(2):392?417, 2006. [18] L. Lov?asz and S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Struct. Algorithms, 30(3):307?358, 2007. [19] H. Narayanan. Randomized interior point methods for sampling and optimization. CoRR, abs/0911.3950, 2009. [20] A.S. Nemirovskii. Interior point polynomial time methods in convex programming, 2004. [21] Y. E. Nesterov and A. S. Nemirovskii. Interior Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, 1994. [22] Y.E. Nesterov and MJ Todd. On the Riemannian geometry defined by self-concordant barriers and interior-point methods. Foundations of Computational Mathematics, 2(4):333?361, 2008. [23] A. Rakhlin. Lecture notes on online learning, 2008. http://stat.wharton.upenn.edu/?rakhlin/papers/online learning.pdf. [24] D. Shah and J. Shin. Dynamics in congestion games. In Proceedings of ACM Sigmetrics, 2010. [25] S. Shalev-Shwartz and Y. Singer. Convex repeated games and fenchel duality. In NIPS. 2007. [26] S. Vempala. Geometric random walks: A survey. In Combinatorial and computational geometry. Math. Sci. Res. Inst. Publ, 52:577?616, 2005. [27] V. Vovk. Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory, pages 372?383. Morgan Kaufmann, 1990. [28] V. Vovk. Competitive on-line statistics. International Statistical Review, 69:213?248, 2001. [29] K. Yamanishi. Minimax relative loss analysis for sequential prediction algorithms using parametric hypotheses. In COLT? 98, pages 32?43, New York, NY, USA, 1998. ACM. [30] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003. 9
Scrambled Objects for Least-Squares Regression

Odalric-Ambrym Maillard and Rémi Munos
SequeL Project, INRIA Lille - Nord Europe, France
{odalric.maillard, remi.munos}@inria.fr

Abstract

We consider least-squares regression using a randomly generated subspace $G_P \subset \mathcal{F}$ of finite dimension $P$, where $\mathcal{F}$ is a function space of infinite dimension, e.g. $L_2([0,1]^d)$. $G_P$ is defined as the span of $P$ random features that are linear combinations of the basis functions of $\mathcal{F}$ weighted by random Gaussian i.i.d. coefficients. In particular, we consider multi-resolution random combinations at all scales of a given mother function, such as a hat function or a wavelet. In this latter case, the resulting Gaussian objects are called scrambled wavelets, and we show that they enable us to approximate functions in Sobolev spaces $H^s([0,1]^d)$. As a result, given $N$ data, the least-squares estimate $\hat{g}$ built from $P$ scrambled wavelets has excess risk $\|f^* - \hat{g}\|_{\mathcal{P}}^2 = O\big(\|f^*\|^2_{H^s([0,1]^d)}(\log N)/P + P(\log N)/N\big)$ for target functions $f^* \in H^s([0,1]^d)$ of smoothness order $s > d/2$. An interesting aspect of the resulting bounds is that they do not depend on the distribution $\mathcal{P}$ from which the data are generated, which is important in the statistical regression setting considered here: randomization enables adaptation to any possible distribution. We conclude by describing an efficient numerical implementation using lazy expansions, with numerical complexity $\tilde{O}(2^d N^{3/2}\log N + N^2)$, where $d$ is the dimension of the input space.

1 Introduction

We consider ordinary least-squares regression using randomly generated feature spaces. Let us first describe the general regression problem: we observe data $D_N = (\{x_n, y_n\}_{1 \leq n \leq N})$ (with $x_n \in \mathcal{X}$, a compact subset of $\mathbb{R}^d$, and $y_n \in \mathbb{R}$), assumed to be independently and identically distributed (i.i.d.), with $x_n \sim \mathcal{P}$ and $y_n = f^*(x_n) + \eta_n$, where $f^*$ is the (unknown) target function, such that $\|f^*\|_\infty \leq L$, and $\eta_n$ is a centered, independent noise of variance bounded by $\sigma^2$. We assume that $L$ and $\sigma$ are known. Now, for a given class of functions $\mathcal{F}$, and $f \in \mathcal{F}$, we define the empirical $\ell_2$-error

$$L_N(f) := \frac{1}{N}\sum_{n=1}^N [y_n - f(x_n)]^2,$$

and the generalization error

$$L(f) := \mathbb{E}_{X,Y}[(Y - f(X))^2].$$

The goal is to return a regression function $\hat{f} \in \mathcal{F}$ with the lowest possible generalization error $L(\hat{f})$. The excess risk $L(\hat{f}) - L(f^*) = \|f^* - \hat{f}\|_{\mathcal{P}}^2$ (where $\|g\|_{\mathcal{P}}^2 = \mathbb{E}_{X \sim \mathcal{P}}[g(X)^2]$) measures the closeness to optimality. In this paper we consider infinite-dimensional spaces $\mathcal{F}$ that are generated by a denumerable family of functions $\{\varphi_i\}_{i \geq 1}$, called initial features (such as wavelets). We will assume that $f^* \in \mathcal{F}$.
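To fix ideas about these two error notions, here is a small hedged sketch; the helper names and the Monte-Carlo evaluation of $\|\cdot\|_{\mathcal{P}}$ are illustrative assumptions, not part of the paper.

```python
import numpy as np

def empirical_error(f, x, y):
    """L_N(f) = (1/N) sum_n (y_n - f(x_n))^2."""
    return float(np.mean((y - f(x))**2))

def excess_risk(f_hat, f_star, sample_P, m=100_000):
    """Monte-Carlo estimate of ||f* - f_hat||_P^2 = E_{X~P}[(f*(X) - f_hat(X))^2]."""
    xs = sample_P(m)
    return float(np.mean((f_star(xs) - f_hat(xs))**2))
```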
Here we consider specific cases of infinite-dimensional spaces $\mathcal F$ and provide a characterization of the resulting approximation spaces.

2 Regression with random spaces

Let us briefly recall the method described in [10] and extend it to infinite-dimensional spaces $\mathcal F$. We assume that the initial features $(\varphi_i)_{i\ge1}$ are continuous and such that
$$\sup_{x\in\mathcal X} \|\varphi(x)\|^2 < \infty, \quad\text{where } \|\varphi(x)\|^2 \stackrel{\rm def}{=} \sum_{i\ge1}\varphi_i(x)^2. \qquad (1)$$
Examples of feature spaces satisfying this property include rescaled wavelets and are described in Section 3. The random subspace $G_P$ is generated by building a set of $P$ random features $(\psi_p)_{1\le p\le P}$, defined as linear combinations of the initial features $\{\varphi_i\}_{i\ge1}$ weighted by random coefficients:
$$\psi_p(x) \stackrel{\rm def}{=} \sum_{i\ge1} A_{p,i}\,\varphi_i(x), \quad\text{for } 1\le p\le P, \qquad (2)$$
where the (infinitely many) coefficients $A_{p,i}$ are drawn i.i.d. from a centered distribution with variance $1/P$; here we explicitly choose a Gaussian distribution $\mathcal N(0, 1/P)$. Such a definition of the features $\psi_p$ as an infinite sum of random variables is not obvious (it is called an expansion of a Gaussian object), and we refer to [11] for elements of the theory of Gaussian objects and of their expansions. It is shown there that under assumption (1) the random features are well defined; they are in fact samples of a centered Gaussian process indexed by $\mathcal X$, with covariance structure $\frac1P\langle\varphi(x),\varphi(x')\rangle$, where $\langle u,v\rangle = \sum_i u_i v_i$ for two square-summable sequences $u$ and $v$. Indeed, $\mathbb E_{A_p}[\psi_p(x)] = 0$ and
$${\rm Cov}_{A_p}\big(\psi_p(x), \psi_p(x')\big) = \mathbb E_{A_p}[\psi_p(x)\psi_p(x')] = \frac1P\sum_{i\ge1}\varphi_i(x)\varphi_i(x') = \frac1P\langle\varphi(x),\varphi(x')\rangle.$$
The continuity of the initial features $(\varphi_i)$ guarantees that there exists a continuous version of the process $\psi_p$, which is thus a Gaussian process. We then define $G_P \subset \mathcal F$ as the (random) vector space spanned by these features, i.e.
$$G_P \stackrel{\rm def}{=} \Big\{g_\beta(x) = \sum_{p=1}^P \beta_p\psi_p(x),\ \beta\in\mathbb R^P\Big\}.$$
The least-squares estimate $g_{\hat\beta} \in G_P$ is the function in $G_P$ with minimal empirical error, i.e.
$$g_{\hat\beta} = \arg\min_{g_\beta\in G_P} L_N(g_\beta), \qquad (3)$$
which is the solution of a least-squares problem: $\hat\beta = \Psi^\dagger Y \in \mathbb R^P$, where $\Psi$ is the $N\times P$ matrix with entries $\Psi_{n,p} = \psi_p(x_n)$, and $\Psi^\dagger$ is its Moore-Penrose pseudo-inverse (in the full-rank case with $N\ge P$, $\Psi^\dagger = (\Psi^T\Psi)^{-1}\Psi^T$). The final prediction function $\hat g(x)$ is the truncation at the threshold $\pm L$ of $g_{\hat\beta}$, i.e.
$$\hat g(x) \stackrel{\rm def}{=} T_L[g_{\hat\beta}(x)], \quad\text{where } T_L(u) \stackrel{\rm def}{=} \begin{cases} u & \text{if } |u|\le L,\\ L\,{\rm sign}(u) & \text{otherwise.}\end{cases}$$
Next, we provide bounds on the approximation error of $f^\ast$ in $G_P$ and deduce excess risk bounds.
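To make the construction concrete, the following minimal sketch builds the random features of (2) and the truncated least-squares estimate (3). It is an illustration, not the authors' code: the infinite family $(\varphi_i)_{i\ge1}$ is truncated to its first $F$ members, which are supplied as a precomputed feature matrix.

```python
import numpy as np

def random_subspace_ls(Phi, y, P, L, rng=None):
    """Least squares on the random subspace G_P, cf. Eqs. (2)-(3).

    Phi : (N, F) matrix of the first F initial features at the training
          points, Phi[n, i] = phi_i(x_n) (F truncates the infinite family).
    y   : (N,) targets; P : subspace dimension; L : truncation level.
    Returns beta_hat and a predictor implementing the truncation T_L.
    """
    rng = np.random.default_rng(rng)
    # A_{p,i} ~ N(0, 1/P): centered Gaussian weights with variance 1/P.
    A = rng.normal(0.0, 1.0 / np.sqrt(P), size=(P, Phi.shape[1]))
    Psi = Phi @ A.T                                    # Psi[n, p] = psi_p(x_n)
    beta_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)  # Moore-Penrose solution

    def predict(Phi_new):
        # Truncation T_L applied to the least-squares estimate g_{beta_hat}.
        return np.clip(Phi_new @ A.T @ beta_hat, -L, L)

    return beta_hat, predict
```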
2.1 Approximation error

We now extend the result of [10] and derive approximation error bounds both in expectation and in high probability. We restrict the set of target functions to the approximation space $\mathcal K \subset \mathcal F$ (also identified with the kernel space associated with the expansion of a Gaussian object):
$$\mathcal K \stackrel{\rm def}{=} \Big\{f_\alpha\in\mathcal F,\ \|\alpha\|^2 \stackrel{\rm def}{=} \sum_{i\ge1}\alpha_i^2 < \infty\Big\}. \qquad (4)$$
Remark 1. This space may be seen from two equivalent points of view: either as a set of functions that are random linear combinations of the initial features, or as a set of functions that are expectations of some random processes (the interpretation in terms of kernel spaces). We do not develop the related theory of Gaussian processes here, but refer the reader interested in the construction of kernel spaces to [11].

Let $f_\alpha = \sum_i \alpha_i\varphi_i \in \mathcal K$. Write $g^\ast$ for the projection of $f_\alpha$ onto $G_P$ w.r.t. the norm $\|\cdot\|_{\mathcal P}$, i.e. $g^\ast = \arg\min_{g\in G_P}\|f_\alpha - g\|_{\mathcal P}$, and $\tilde g^\ast = T_L\,g^\ast$ for its truncation at a threshold $L \ge \|f_\alpha\|_\infty$. Notice that, due to the randomness of the features $(\psi_p)_{1\le p\le P}$, the space $G_P$ is random, and so is $\tilde g^\ast$. The following result bounds the approximation error $\|f_\alpha - \tilde g^\ast\|_{\mathcal P}$ both in expectation and in high probability.

Theorem 1. For any $\delta > 0$, whenever $P \ge c_1\log\big(P\eta^2\log(1/\delta)/\delta\big)$, we have, with probability $1-\delta$ (w.r.t. the choice of the random subspace $G_P$),
$$\inf_{g\in G_P}\|f_\alpha - T_L(g)\|_{\mathcal P}^2 \;\le\; c_2\,\frac{\|\alpha\|^2\sup_x\|\varphi(x)\|^2}{P}\Big(1 + \sqrt{\log\big(P\eta^2\log(1/\delta)/\delta\big)}\Big),$$
where $\eta = \|\alpha\|\sup_x\|\varphi(x)\|/L$ and $c_1, c_2$ are universal constants (see [11]). A similar result holds in expectation.

This result relies on the property that $\inf_{g\in G_P}\|f_\alpha - g\|_{\mathcal P} \le \|f_\alpha - g_{A\alpha}\|_{\mathcal P}$, and that $g_{A\alpha}$, considered as a random variable w.r.t. the choice of the random elements $A$, concentrates around $f_\alpha$ (in $\|\cdot\|_{\mathcal P}$ norm) as $P$ increases. Indeed, $g_{A\alpha}(x) = (A\alpha)\cdot(A\varphi(x))$, which is close to $\alpha\cdot\varphi(x) = f_\alpha(x)$, since inner products are approximately preserved by random projections (a variant of the Johnson-Lindenstrauss (JL) lemma). The proof of Theorem 1 (provided in the appendix of [11]) consists in generating auxiliary samples $X_1',\ldots,X_J'$ from $\mathcal P$, applying the JL lemma at those points, and combining this with a Chernoff-Hoeffding bound to extend the result to the $\|\cdot\|_{\mathcal P}$ norm.

Remark 2. An interesting property of this result is that the bound does not depend on the distribution $\mathcal P$. This distribution enters the definition of the norm $\|\cdot\|_{\mathcal P}$, which assesses how well a space $G_P$ can approximate a function $f_\alpha$; it is thus surprising that $\mathcal P$ does not appear in the bound. The randomness of $G_P$ is precisely what enables it to be close to $f_\alpha$ (in high probability or in expectation) whatever the measure $\mathcal P$ is. This is especially interesting in a regression setting where the distribution $\mathcal P$ generating the data is not known in advance.

2.2 Excess risk bounds

We now combine the approximation error bound of Theorem 1 with usual estimation error bounds for linear spaces (see e.g. [7]). Consider a target function $f^\ast = \sum_i\alpha_i^\ast\varphi_i \in \mathcal K$, and recall that our prediction function $\hat g = T_L[g_{\hat\beta}]$ is the truncation of the (ordinary) least-squares estimate $g_{\hat\beta}$, the empirical risk minimizer in the random space $G_P$ defined by (3). We have the following upper bounds, in expectation and in high probability (the proof is given in [11]).

Theorem 2. Whenever $P \ge c_3\log N$, the following bound holds in expectation (w.r.t. all sources of randomness: input data, noise, and the choice of the random features):
$$\mathbb E_{G_P,X,Y}\,\|f^\ast - \hat g\|_{\mathcal P}^2 \;\le\; c_4\Big(\big(\sigma^2 + L^2\big)\frac{P\log N}{N} + \frac{\log N}{P}\,\|\alpha^\ast\|^2\sup_x\|\varphi(x)\|^2\Big). \qquad (5)$$
Moreover, for any $\delta > 0$, whenever $P \ge c_5\log(N/\delta)$, the following bound holds with high probability (w.r.t. the choice of the random features), where $c_3, c_4, c_5, c_6$ are universal constants (see [11]):
$$\mathbb E_{X,Y}\,\|f^\ast - \hat g\|_{\mathcal P}^2 \;\le\; c_6\Big(\big(\sigma^2 + L^2\big)\frac{P\log N}{N} + \frac{\log(N/\delta)}{P}\,\|\alpha^\ast\|^2\sup_x\|\varphi(x)\|^2\Big). \qquad (6)$$

Theorems 1 and 2 say that if the term $\|\alpha^\ast\|^2\sup_x\|\varphi(x)\|^2$ is small, then the least-squares estimate in the random subspace $G_P$ has low excess risk. The question we address next is whether we can define spaces for which this is the case. In the next section we provide two examples of feature spaces and characterize the space of functions for which this term is controlled.
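For intuition, the two terms of the bound (5), an estimation term growing linearly in $P$ and an approximation term decaying as $1/P$, can be balanced in $P$; a short computation, treating the log factors and constants as fixed, recovers the choice of $P$ used later in (9):

```latex
% Balancing the two terms of (5), with log N treated as a constant:
\min_P \; \Big( \frac{P}{N} + \frac{\|\alpha^\ast\|^2 \sup_x \|\varphi(x)\|^2}{P} \Big)
\quad\Longrightarrow\quad
P^\ast = \sqrt{N}\,\|\alpha^\ast\|\,\sup_x \|\varphi(x)\| ,
\qquad
\text{optimal value} \;=\; \frac{2\,\|\alpha^\ast\|\,\sup_x \|\varphi(x)\|}{\sqrt{N}} .
```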
3 Regression with Scrambled Objects

In the two examples below, the (infinitely many) initial features are translations and rescalings, at all scales, of a given continuous mother function. Each random feature $\psi_p$ is thus a Gaussian object based on a multi-scale scheme built from the mother function, and we call it a "scrambled object", referring to the disorderly construction of this multi-resolution random process. We propose to solve the regression problem by ordinary least squares on the (random) approximation space spanned by $P$ such scrambled objects. The first example takes a hat function as mother function, and the corresponding scrambled objects are Brownian motions; the second example considers wavelets. The proofs of the bounds (7) and (8) can be found in [11].

3.1 Brownian motions and Brownian sheets

Dimension 1. We start with the one-dimensional case, $\mathcal X = [0,1]$, and choose as mother function the hat function $\Lambda(x) = x\,\mathbb I_{[0,1/2)}(x) + (1-x)\,\mathbb I_{[1/2,1)}(x)$. We define the (infinite) set of initial features as translated and rescaled hat functions, $\varphi_{j,l}(x) = 2^{-j/2}\Lambda(2^jx - l)$ for scales $j\ge1$ and translation indices $0\le l\le 2^j-1$, together with $\varphi_{0,0}(x) = x$. This is a basis of the space of continuous functions on $[0,1]$ equal to $0$ at $0$ (introduced by Faber in 1910 and known as the Schauder basis; see [8] for an interesting overview). The functions are indexed by the scale $j$ and translation $l$, but may equivalently be indexed by a single index $i\ge1$. The random features $\psi_p(x)$, defined as linear combinations of these hat functions weighted by i.i.d. Gaussian random numbers, are then Brownian motions (see Example 1 of [11] for the proof). In addition, the corresponding kernel space $\mathcal K$ can be characterized: it is the Sobolev space $H^1([0,1])$ of order 1 (functions with a weak derivative in $L_2([0,1])$).

Dimension d. For the extension to dimension $d$, we define the initial features as tensor products $\varphi_{j,l}$ of one-dimensional hat functions (so $j$ and $l$ are multi-indices). The random features $\psi_p(x)$ are Brownian sheets (the extension of Brownian motion to several dimensions), and the corresponding kernel space $\mathcal K$ is the Cameron-Martin space [9], endowed with the norm $\|f\|_{\mathcal K} = \big\|\frac{\partial^d f}{\partial x_1\cdots\partial x_d}\big\|_{L_2([0,1]^d)}$ (see again Example 1 of [11] for the proof). One may interpret this space as the set of functions possessing a $d$-th order crossed weak derivative $\frac{\partial^d f}{\partial x_1\cdots\partial x_d}$ in $L_2([0,1]^d)$ and vanishing on the "left" boundary (the edges containing $0$) of the unit $d$-dimensional cube. Note that for $d>1$ this space differs from the Sobolev space $H^1$.

Regression with Brownian sheets. When one uses Brownian sheets for regression with a target function $f^\ast = \sum_i\alpha_i^\ast\varphi_i$ lying in the Cameron-Martin space $\mathcal K$ defined above (i.e. such that $\|\alpha^\ast\| < \infty$), the term $\|\alpha^\ast\|^2\sup_{x\in\mathcal X}\|\varphi(x)\|^2$ appearing in Theorems 1 and 2 is bounded as
$$\|\alpha^\ast\|^2\sup_{x\in\mathcal X}\|\varphi(x)\|^2 \;\le\; 2^{-d}\,\|f^\ast\|_{\mathcal K}^2.$$
Thus, by Theorem 2, ordinary least squares performed on random subspaces spanned by $P$ Brownian sheets has expected excess risk
$$\mathbb E_{G_P,X,Y}\,\|f^\ast - \hat g\|_{\mathcal P}^2 \;=\; O\Big(\frac{\log N}{N}\,P + \frac{\log N}{P}\,\|f^\ast\|_{\mathcal K}^2\Big), \qquad (7)$$
and a similar bound holds in high probability.
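As a sanity check of the one-dimensional construction, the sketch below samples a single random feature from the hat-function family; its graph can be compared visually with a Brownian motion path. The truncation depth $J$ of the Schauder expansion is an assumption of the example (the true object is an infinite sum), and with $P = 1$ the $1/P$ variance normalization is trivial.

```python
import numpy as np

def hat(u):
    """Mother hat function Lambda: u on [0, 1/2), 1 - u on [1/2, 1), else 0."""
    return np.where((u >= 0) & (u < 0.5), u,
                    np.where((u >= 0.5) & (u < 1.0), 1.0 - u, 0.0))

def sample_brownian_feature(x, J, rng=None):
    """Truncated Schauder expansion of a random feature:
    psi(x) = A_0 x + sum_{1<=j<=J} sum_l A_{j,l} 2^{-j/2} hat(2^j x - l),
    with i.i.d. standard Gaussian weights A; as J grows this approaches
    a Brownian motion path."""
    rng = np.random.default_rng(rng)
    psi = rng.normal() * x                      # the phi_{0,0}(x) = x term
    for j in range(1, J + 1):
        for l in range(2 ** j):
            psi = psi + rng.normal() * 2 ** (-j / 2) * hat(2 ** j * x - l)
    return psi

x = np.linspace(0.0, 1.0, 513)
path = sample_brownian_feature(x, J=10, rng=0)  # resembles a Brownian path
```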
3.2 Scrambled Wavelets in $[0,1]^d$

We now introduce a second example, built from a family of orthogonal wavelets $(\bar\Phi_{\varepsilon,j,l}) \subset C^q([0,1]^d)$ (where $\varepsilon\in\{0,1\}^d$ is a multi-index, $j$ a scale index and $l$ a translation multi-index; see [2, 12] for details of the notations), with at least $q > d/2$ vanishing moments. For $s\in(d/2, q)$, we define the initial features $(\varphi_{\varepsilon,j,l})$ as the rescaled wavelets $\varphi_{\varepsilon,j,l} = 2^{-js}\,\bar\Phi_{\varepsilon,j,l}/\|\bar\Phi_{\varepsilon,j,l}\|_2$. Again, the initial features may equivalently be indexed by a single index $i\ge1$. The random features $\psi_p$ defined from (2) are called "scrambled wavelets". It can be shown that the resulting approximation space $\mathcal K = \{f_\alpha = \sum_i\alpha_i\varphi_i,\ \|\alpha\| < \infty\}$ is the Sobolev space $H^s([0,1]^d)$.

Regression with scrambled wavelets. Assume that the mother wavelet $\bar\Phi$ has compact support $[0,1]^d$ and is bounded by $\lambda$, and that the target function $f^\ast = \sum_i\alpha_i^\ast\varphi_i$ lies in the Sobolev space $H^s([0,1]^d)$ with $s > d/2$ (i.e. such that $\|\alpha^\ast\| < \infty$). Then
$$\|\alpha^\ast\|^2\sup_{x\in\mathcal X}\|\varphi(x)\|^2 \;\le\; \frac{\lambda^2\,2^d(2^d-1)}{1 - 2^{-2(s-d/2)}}\,\|f^\ast\|^2_{H^s([0,1]^d)}.$$
Thus, by Theorem 2, ordinary least squares performed on random subspaces spanned by $P$ scrambled wavelets has expected excess risk
$$\mathbb E_{G_P,X,Y}\,\|f^\ast - \hat g\|_{\mathcal P}^2 \;=\; O\Big(\frac{\log N}{N}\,P + \frac{\log N}{P}\,\|f^\ast\|^2_{H^s([0,1]^d)}\Big), \qquad (8)$$
and a similar bound holds in high probability. In both examples, choosing $P$ of order $\sqrt N\,\|f^\ast\|_{\mathcal K}$ yields the excess risk
$$\mathbb E\,\|f^\ast - \hat g\|_{\mathcal P}^2 \;=\; O\Big(\frac{\|f^\ast\|_{\mathcal K}\,\log N}{\sqrt N}\Big). \qquad (9)$$

3.3 Remark about randomized spaces

Note that the bounds on the excess risk obtained in (7), (8) and (9) do not depend on the distribution $\mathcal P$ under which the data are generated. This is crucial in our setting, since $\mathcal P$ is usually unknown, and the property does not hold for non-randomized approximation spaces. Indeed, it is relatively easy to exhibit a well-chosen set of features $\varphi_i$ approximating functions in a given class under one particular measure: when $\mathcal P = \mu$, the Lebesgue measure, and $f^\ast \in H^s([0,1]^d)$ (with $s > d/2$), linear regression using wavelets with at least $d/2$ vanishing moments, which form an orthonormal basis of $L_{2,\mu}([0,1]^d)$, achieves a bound similar to (8). This is no longer the case when $\mathcal P$ is not the Lebesgue measure, and it seems difficult to modify the features $\varphi_i$ so as to recover the same bound even when $\mathcal P$ is known, let alone when $\mathcal P$ is arbitrary and unknown in advance. Randomization defines approximation spaces such that the approximation error (in expectation or in high probability over the choice of the random space) is controlled whatever the measure $\mathcal P$ used to assess the performance, even when $\mathcal P$ is unknown.

For illustration, consider a very peaky (spot-like) distribution $\mathcal P$ in a high-dimensional space $\mathcal X$. Regular linear approximation, say with wavelets (see e.g. [6]), will most probably miss the specific behavior of $f^\ast$ at the spot, since the first wavelets have large support. On the contrary, scrambled wavelets, which contain (random combinations of) all wavelets, are able to detect correlations between the data and some high-frequency wavelets, and thus discover relevant features of $f^\ast$ at the spot. This is illustrated in the numerical experiment below, where $\mathcal P$ is a very peaky Gaussian distribution, $f^\ast$ is a one-dimensional periodic function, and the initial features $(\varphi_i)_{i\ge1}$ are the hat functions of Section 3.1. Figure 1 shows the target function $f^\ast$, the distribution $\mathcal P$, and the data $(x_n, y_n)_{1\le n\le100}$ (left plots).
The middle plots show the least-squares estimate $\hat g$ using $P = 40$ scrambled objects $(\psi_p)_{1\le p\le40}$ (here Brownian motions). The right plots show the least-squares estimate using the initial features $(\varphi_i)_{1\le i\le40}$. The top figures give a high-level view of the whole domain $[0,1]$: no method is able to learn $f^\ast$ on the whole space, which is expected since the available data are generated from a peaky distribution. The bottom figures zoom on $[0.45, 0.51]$, around the data: least-squares regression using scrambled objects learns the structure of $f^\ast$ in terms of the measure $\mathcal P$.

[Figure with panels "Target function", "Predicted function: BLSR_Hat", and "Predicted function: LSR_Hat"; the plots are not reproduced here.] Figure 1: LS estimate of $f^\ast$ using $N = 100$ data generated from a peaky distribution $\mathcal P$ (left plots), using 40 Brownian motions $(\psi_p)$ (middle plots) and 40 hat functions $(\varphi_i)$ (right plots). The bottom row shows a zoom around the data.

4 Discussion

Minimax optimality. Although the rate $\tilde O(N^{-1/2})$ deduced in (9) does not depend on the dimension $d$ of the input data, it does not contradict the known minimax lower bounds, which are $\Omega(N^{-2s/(2s+d)})$ for functions defined over $[0,1]^d$ possessing $s$ degrees of smoothness (e.g. that are $s$-times differentiable); see e.g. Chapter 3 of [7]. Indeed, the kernel space $\mathcal K$ consists of functions whose order of smoothness may depend on $d$: for scrambled wavelets, $\mathcal K$ is the Sobolev space $H^s([0,1]^d)$ with $s > d/2$, so $2s/(2s+d) > 1/2$. Moreover, with wavelets possessing $q > d/2$ vanishing moments one may choose $s$ (with $q > s > d/2$) arbitrarily close to $d/2$, so the excess risk rate $\tilde O(N^{-1/2})$ deduced from Theorem 2 is arbitrarily close to the minimax lower rate: regression using scrambled wavelets is minimax optimal up to logarithmic factors. Concerning Brownian sheets, we are not aware of minimax lower bounds for Cameron-Martin spaces, so we do not know whether regression using Brownian sheets is minimax optimal.

Links with RKHS theory. There are strong links between the kernel space of a Gaussian object (see Eq. (4)) and Reproducing Kernel Hilbert Spaces (RKHS). Two properties illustrate these links:

- Kernel spaces of Gaussian objects can be built from a Carleman operator, i.e. a linear injective mapping $J: \mathcal H \to \mathcal S$ (where $\mathcal H$ is a Hilbert space) such that $J(h)(t) = \int \Gamma_t(s)h(s)\,ds$, where $(\Gamma_t)_t$ is a collection of functions of $\mathcal H$. There is a bijection between Carleman operators and the set of RKHSs [4, 15].

- Expansion of a Mercer kernel. When $\mathcal X$ is compact Hausdorff and $k$ is a continuous kernel, the Mercer expansion of $k$ is $k(x,y) = \sum_{i=1}^\infty \lambda_i e_i(x)e_i(y)$, where $(\lambda_i)_i$ and $(e_i)_i$ are the eigenvalues and eigenfunctions of the integral operator $L_k: L_{2,\mu}(\mathcal X)\to L_{2,\mu}(\mathcal X)$ defined by $(L_k f)(x) = \int_{\mathcal X} k(x,y)f(y)\,d\mu(y)$. The associated RKHS is $\mathcal K = \{f = \sum_i\alpha_i\varphi_i;\ \sum_i\alpha_i^2 < \infty\}$, where $\varphi_i = \sqrt{\lambda_i}\,e_i$, endowed with the inner product $\langle f_\alpha, f_\beta\rangle = \langle\alpha,\beta\rangle_{\ell_2}$.
This space is thus also the kernel space of the Gaussian object as defined by (4). The Mercer expansion gives an explicit construction of the functions of the RKHS, but computing the eigenvalues and eigenfunctions of the integral operator $L_k$, and hence the basis functions $\varphi_i$, may not be straightforward in general. The approach described in this paper instead lets us choose the initial basis functions explicitly and build the corresponding kernel space: we presented expansions using multi-resolution bases (hat functions and wavelets), which are not easy to obtain from a Mercer expansion. This is interesting because, from the choice of the initial basis, we can characterize the corresponding approximation space (e.g. a Sobolev space in the case of wavelets). Another, more practical benefit is that multi-resolution bases with a compactly supported mother function admit efficient numerical implementations, as described in Section 5.

Related works. In [14, 13], the authors consider, for a given parameterized function $\phi: \mathcal X\times\Omega\to\mathbb R$ bounded by 1 and a probability measure $\mu$ over $\Omega$, the space $\mathcal F$ of functions $f(x) = \int \alpha(\omega)\phi(x,\omega)\,d\omega$ such that $\|f\|_\mu = \sup_\omega|\alpha(\omega)/\mu(\omega)| < \infty$. They show that this is a dense subset of the RKHS with kernel $k(x,y) = \int_\Omega \mu(\omega)\phi(x,\omega)\phi(y,\omega)\,d\omega$, and that if $f\in\mathcal F$, then with high probability over i.i.d. draws $(\omega_p)_{p\le P}\sim\mu$ there exist coefficients $(c_p)_{p\le P}$ such that $\hat f(x) = \sum_{p=1}^P c_p\phi(x,\omega_p)$ satisfies $\|\hat f - f\|_2^2 \le O(\|f\|_\mu/\sqrt P)$. The method is analogous to the construction of the empirical estimates $g_{A\alpha}\in G_P$ of a function $f_\alpha\in\mathcal K$ in our setting: we may formally identify $\phi(x,\omega_p)$ with $\psi_p(x) = \sum_i A_{p,i}\varphi_i(x)$, $\omega_p$ with the sequence $(A_{p,i})_i$, and the law $\mu$ with the law of this infinite sequence. However, our setting does not require the condition $\sup_{x,\omega}\phi(x,\omega)\le1$ to hold, and since $\Omega$ is here a set of infinite sequences, the identification is tedious without the theory of Gaussian random functions used in this paper. We nonetheless believe this link provides a better mutual understanding of both approaches (i.e. [14] and this paper). In [1], the authors provide excess risk bounds for greedy algorithms (a non-linear approximation setting). The bound derived in their Theorem 3.1 is similar to our Theorem 2; the main difference is that their bound uses the $\ell_1$ norm of the coefficients $\alpha^\ast$ instead of the $\ell_2$ norm in our setting. It would be interesting to investigate whether this difference is a consequence of the non-linearity of their approximation, or results from the different assumptions about the approximation spaces, in terms of the rate of decrease of the coefficients.

5 Efficient implementation using a lazy multi-resolution expansion

In practice, in order to build the least-squares estimate, one needs the values of the random features $(\psi_p)_{1\le p\le P}$ at the data points $(x_n)_{1\le n\le N}$, i.e. the matrix $\Psi = (\psi_p(x_n))_{p\le P,\,n\le N}$. Because of the finite memory and precision of computers, numerical implementations can only handle a finite number $F$ of initial features $(\varphi_i)_{1\le i\le F}$. In [10] it was noted that the computation of $\Psi$, which uses the random matrix $A = (A_{p,i})_{p\le P,\,i\le F}$, has complexity $O(FPN)$.
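For comparison with the lazy scheme described next, this is the dense computation in question; the sizes below are purely illustrative, and any feature map can stand in for $\varphi_i(x_n)$:

```python
import numpy as np

# Dense computation of Psi = (psi_p(x_n)): the full (P, F) random matrix A
# is drawn and multiplied against all F initial features, an O(FPN) cost
# that becomes infeasible when F = 2^{d(H+1)} features are kept.
N, F, P = 1000, 2**12, 100
rng = np.random.default_rng(0)
Phi = rng.uniform(size=(N, F))                       # stand-in for phi_i(x_n)
A = rng.normal(0.0, 1.0 / np.sqrt(P), size=(P, F))   # A_{p,i} ~ N(0, 1/P)
Psi = Phi @ A.T                                      # (N, P); the O(FPN) bottleneck
```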
However, in the multi-resolution schemes described here, provided that the mother function has compact support (as for the hat functions or the Daubechies wavelets), we can significantly speed up the computation of $\Psi$ by using a tree-based lazy expansion, in which the expansion of a random feature $\psi_p$ is built only where it is needed for evaluation at the points $(x_n)_n$. Consider the example of the scrambled wavelets. In dimension 1, using a wavelet dyadic tree of depth $H$ (i.e. $F = 2^{H+1}$), the numerical cost of computing $\Psi$ is $O(HPN)$ (using one tree per random feature). In dimension $d$, the classical extension of one-dimensional wavelets uses a family of $2^d - 1$ wavelets, and thus requires $2^d - 1$ trees, each having $2^{dH}$ nodes. While the resulting number of initial features $F$ is of order $2^{d(H+1)}$, thanks to the lazy evaluation (one never computes all the initial features) each training point expands at most one path of length $H$, and the resulting complexity for computing $\Psi$ is $O(2^d HPN)$. Note that one may alternatively use sparse grids instead of wavelet trees, introduced by Griebel and Zenger (see [18, 3]); the main result there is that the total number of features can be reduced significantly, to $F = O(2^H H^d)$, while preserving a good approximation for sufficiently smooth functions, and similar lazy evaluation techniques apply to sparse grids as well.
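A minimal sketch of the lazy idea in dimension 1: for a compactly supported mother function, only the $H+1$ translates whose support contains $x$ are expanded, and the random weights $A_{p,i}$ are drawn reproducibly on demand. The per-index seeding trick is an assumption of this sketch, not the authors' implementation, and the $1/\sqrt P$ normalization of the weights is omitted for clarity.

```python
import numpy as np

def lazy_feature(x, p, H, mother, master_seed=0):
    """Evaluate the random feature psi_p at a point x in [0, 1) by expanding
    only the dyadic-tree path of depth H containing x: O(H) work per point,
    versus O(2^{H+1}) for the dense expansion."""
    psi = 0.0
    for j in range(H + 1):
        l = int(np.floor(2**j * x))      # unique translate whose support contains x
        # Draw A_{p,(j,l)} reproducibly from (p, j, l), so that repeated
        # queries of the same feature at nearby points agree.
        w = np.random.default_rng([master_seed, p, j, l]).normal()
        psi += w * 2 ** (-j / 2) * mother(2**j * x - l)
    return psi
```

A usage example, reusing the hat function of Section 3.1: `lazy_feature(0.37, p=3, H=12, mother=hat)` touches 13 tree nodes instead of $2^{13}$.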
We described a lazy expansion approach for computing the regression function which has a numerical complexity O(N 2 + 2d N 3/2 log N ). A limitation of the current scrambled wavelets is that, so far, we did not consider refined analysis for spaces H s with large smoothness s ? d/2. Possible directions for better handling such spaces may involve refined covering number bounds which will be the object of future works. Acknowledgment This work has been supported by French National Research Agency (ANR) through COSINUS program (project EXPLO-RA number ANR-08-COSI-004). 8 References [1] Andrew Barron, Albert Cohen, Wolfgang Dahmen, and Ronald Devore. Approximation and learning by greedy algorithms. 36:1:64?94, 2008. [2] Gerard Bourdaud. Ondelettes et espaces de besov. Rev. Mat. Iberoamericana, 11:3:477?512, 1995. [3] Hans-Joachim Bungartz and Michael Griebel. Sparse grids. In Arieh Iserles, editor, Acta Numerica, volume 13. University of Cambridge, 2004. [4] St?ephane Canu, Xavier Mary, and Alain Rakotomamonjy. Functional learning through kernel. arXiv, 2009, October. [5] D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In STOC ?87: Proceedings of the nineteenth annual ACM symposium on Theory of computing, pages 1?6, New York, NY, USA, 1987. ACM. [6] R. DeVore. Nonlinear Approximation. Acta Numerica, 1997. [7] L. Gy?orfi, M. Kohler, A. Krzy?zak, and H. Walk. A distribution-free theory of nonparametric regression. Springer-Verlag, 2002. [8] St?ephane Jaffard. D?ecompositions en ondelettes. In Development of mathematics 1950?2000, pages 609?634. Birkh?auser, Basel, 2000. [9] Svante Janson. Gaussian Hilbert spaces. Cambridge Univerity Press, Cambridge, UK, 1997. [10] Odalric-Ambrym Maillard and R?emi Munos. Compressed Least-Squares Regression. In NIPS 2009, Vancouver Canada, 2009. [11] Odalric-Ambrym Maillard and R?emi Munos. Linear regression with random projections. Technical report, Hal INRIA: http://hal.archives-ouvertes.fr/inria-00483014/, 2010. [12] Stephane Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999. [13] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In John C. Platt, Daphne Koller, Yoram Singer, Sam T. Roweis, John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, NIPS. MIT Press, 2007. [14] Ali Rahimi and Benjamin Recht. Uniform approximation of functions with random bases. 2008. [15] S. Saitoh. Theory of reproducing Kernels and its applications. Longman Scientific & Technical, Harlow, UK, 1988. [16] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267?288, 1994. [17] A. N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Math Dokl 4, pages 1035?1038, 1963. [18] C. Zenger. Sparse grids. In W. Hackbusch, editor, Parallel Algorithms for Partial Differential Equations, Proceedings of the Sixth GAMM-Seminar, volume 31 of Notes on Num. Fluid Mech., Kiel, 1990. Vieweg-Verlag. 9
A primal-dual algorithm for group sparse regularization with overlapping groups

Silvia Villa (DISI, Università di Genova, [email protected]), Sofia Mosci (DISI, Università di Genova, [email protected]), Lorenzo Rosasco (IIT - MIT, [email protected]), Alessandro Verri (DISI, Università di Genova, [email protected])

Abstract

We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori. In particular, we propose a new optimization procedure for solving the regularized algorithm presented in [12], where the group lasso penalty is generalized to overlapping groups of variables. While the implementation proposed in [12] requires explicit replication of the variables belonging to more than one group, our iterative procedure is based on a combination of proximal methods in the primal space and a projected Newton method in a reduced dual space corresponding to the active groups. This procedure provides a scalable alternative with no need for data duplication, and allows us to deal with high-dimensional problems without pre-processing for dimensionality reduction. The computational advantages of our scheme with respect to state-of-the-art algorithms using data duplication are shown empirically in numerical simulations.

1 Introduction

Sparsity has become a popular way to deal with small samples of high-dimensional data and, in a broad sense, refers to the possibility of writing the solution in terms of a few building blocks. Sparsity-based methods are often the key to finding interpretable models in real-world problems. In particular, regularization based on $\ell_1$-type penalties is a powerful approach to variable selection, since it provides sparse solutions by minimizing a convex functional. The success of $\ell_1$ regularization motivated the exploration of different kinds of sparsity for (generalized) linear models, exploiting available a priori information that restricts the admissible sparsity patterns of the solution. One example of a sparsity pattern arises when the input variables are partitioned into groups known a priori, and the goal is to estimate a sparse model where variables belonging to the same group are either jointly selected or jointly discarded. This problem can be solved by regularizing with the group-$\ell_1$ penalty, also known as the group lasso penalty: the sum, over groups, of the Euclidean norms of the coefficients restricted to each group. A possible generalization of the group lasso considers groups of variables which may overlap, the goal being to estimate a model whose support is a union of groups. This is a common situation in bioinformatics (especially with high-throughput data such as gene expression and mass spectrometry data), where problems are characterized by a very low number of samples and several thousands of variables. When the number of samples does not suffice to guarantee accurate model estimation, an alternative is to take advantage of the large amount of prior knowledge encoded in online databases such as the Gene Ontology. Largely motivated by applications in bioinformatics, a new type of penalty is proposed in [12] and shown to perform better than simple $\ell_1$ regularization.
A straightforward way to solve the minimization problem underlying the method proposed in [12] is to apply state-of-the-art techniques for the group lasso (we recall interior-point methods [3, 20], block coordinate descent [16], and proximal methods [9, 21], also known as forward-backward splitting algorithms, among others) in an expanded space built by duplicating the variables that belong to more than one group. As already mentioned in [12], though very simple, such an implementation does not scale to large datasets when the groups overlap significantly, and a more scalable algorithm with no data duplication is needed. For this reason we propose an alternative optimization approach to the group lasso problem with overlap. Our method does not require explicit replication of the features and is thus better suited to high-dimensional problems with large group overlap. Our approach is based on a proximal method (see for example [18, 6, 5]) and on two ad hoc results that allow us to compute the proximity operator efficiently in a much lower-dimensional space: Lemma 1 identifies the subset of active groups, while Theorem 2 formulates the reduced dual problem for computing the proximity operator, whose dimensionality coincides with the number of active groups. The dual problem can then be solved via Bertsekas' projected Newton method [7]. We recall that a particular overlapping structure is the hierarchical one, where the overlap between groups is limited to the inclusion of a descendant in its ancestors. In this case the CAP penalty [24] can be used for model selection, as done in [2, 13], but ancestors are forced to be selected whenever any of their descendants is selected; thanks to the nested structure, the proximity operator of the penalty term can be computed exactly in a finite number of steps [14]. This is no longer possible in the case of general overlap. Finally, it is worth noting that the penalty analyzed here can also be applied to the hierarchical group lasso; differently from [2, 13], selection of ancestors is then no longer enforced.

The paper is organized as follows. In Section 2 we recall the group lasso functional for overlapping groups and set some notations. In Section 3 we state the main results, present a new iterative optimization procedure, and discuss computational issues. In Section 4 we present numerical experiments comparing the running time of our algorithm with state-of-the-art techniques. The proofs are reported in the supplementary material.

2 Problem and Notations

We first fix some notations. Given a vector $\beta\in\mathbb R^d$, $\|\beta\|$ denotes the $\ell_2$ norm, and $\|\beta\|_G = \big(\sum_{j\in G}\beta_j^2\big)^{1/2}$ denotes the $\ell_2$ norm of the components of $\beta$ in $G\subset\{1,\ldots,d\}$. For any differentiable function $f:\mathbb R^B\to\mathbb R$, we denote by $\nabla_r f$ its partial derivative with respect to variable $r$, and by $\nabla f = (\nabla_r f)_{r=1}^B$ its gradient. We can now cast group-$\ell_1$ regularization with overlapping groups as the following variational problem. Given a training set $\{(x_i,y_i)\}_{i=1}^n \subset (\mathcal X\times Y)^n$, a dictionary $(\varphi_j)_{j=1}^d$, and $B$ subsets of variables $\mathcal G = \{G_r\}_{r=1}^B$ with $G_r\subset\{1,\ldots,d\}$, we assume the estimator to be described by a generalized linear model $f(x) = \sum_{j=1}^d \varphi_j(x)\beta_j$ and consider the following regularization scheme:
$$\beta^\ast = \arg\min_{\beta\in\mathbb R^d} \mathcal E_\tau(\beta) = \arg\min_{\beta\in\mathbb R^d}\Big\{\frac1n\|\Psi\beta - y\|^2 + 2\tau\,\Omega^{\mathcal G}_{\rm overlap}(\beta)\Big\}, \qquad (1)$$
where $\Psi$ is the $n\times d$ matrix given by the dictionary features evaluated at the training points, $[\Psi]_{i,j} = \varphi_j(x_i)$. The term $\frac1n\|\Psi\beta - y\|^2$ is the empirical error $\frac1n\sum_{i=1}^n\ell(f(x_i),y_i)$ when the cost function $\ell:\mathbb R\times Y\to\mathbb R_+$ is the square loss, $\ell(f(x),y) = (y - f(x))^2$ (the analysis would apply immediately to other loss functions, e.g. the logistic loss). The penalty term $\Omega^{\mathcal G}_{\rm overlap}:\mathbb R^d\to\mathbb R_+$ is lower semicontinuous, convex, and one-homogeneous ($\Omega^{\mathcal G}_{\rm overlap}(\lambda\beta) = \lambda\,\Omega^{\mathcal G}_{\rm overlap}(\beta)$ for all $\beta\in\mathbb R^d$ and $\lambda\in\mathbb R_+$), and is defined as
$$\Omega^{\mathcal G}_{\rm overlap}(\beta) = \inf\Big\{\sum_{r=1}^B\|v_r\|\ :\ v_r\in\mathbb R^d,\ {\rm supp}(v_r)\subset G_r,\ \sum_{r=1}^B v_r = \beta\Big\}.$$
The functional $\Omega^{\mathcal G}_{\rm overlap}$ was introduced in [12] as a generalization of the group lasso penalty allowing overlapping groups, while maintaining the group lasso property of enforcing sparse solutions whose support is a union of groups. When groups do not overlap, $\Omega^{\mathcal G}_{\rm overlap}$ reduces to the group lasso penalty. Note that, as pointed out in [12], using $\sum_{r=1}^B\|\beta\|_{G_r}$ as the generalization of the group lasso penalty leads to a solution whose support is the complement of a union of groups. For an extensive study of the properties of $\Omega^{\mathcal G}_{\rm overlap}$, its comparison with the $\ell_1$ norm, and its extension to the graph lasso, we refer the interested reader to [12].
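Since the penalty is defined variationally, it can be evaluated directly with an off-the-shelf convex solver. The sketch below uses CVXPY, which is our choice for illustration, not a tool used in the paper:

```python
import numpy as np
import cvxpy as cp

def omega_overlap(beta, groups):
    """Evaluate the overlap penalty: the minimal sum of group norms over
    all decompositions beta = sum_r v_r with supp(v_r) contained in G_r."""
    d = len(beta)
    V = [cp.Variable(d) for _ in groups]
    constraints = [sum(V) == beta]
    for v, G in zip(V, groups):
        outside = [j for j in range(d) if j not in set(G)]  # supp(v_r) in G_r
        if outside:
            constraints.append(v[outside] == 0)
    objective = cp.Minimize(sum(cp.norm(v, 2) for v in V))
    return cp.Problem(objective, constraints).solve()

# Example: two overlapping groups on R^3 sharing the middle variable.
val = omega_overlap(np.array([1.0, 1.0, 1.0]), [[0, 1], [1, 2]])
```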
3 The GLO-pridu Algorithm

If one needs to solve problem (1) for high-dimensional data, the use of standard second-order methods such as interior-point techniques is precluded (see for instance [6]), since they must solve large systems of linear equations to compute the Newton steps. On the other hand, first-order methods inspired by Nesterov's seminal paper [19] (see also [18]) and based on proximal methods have already proved to be a computationally efficient alternative in many machine learning applications [9, 21].

3.1 A proximal algorithm

The convex functional $\mathcal E_\tau$ in (1) is the sum of a differentiable term, namely $\frac1n\|\Psi\beta - y\|^2$, and a non-differentiable one-homogeneous term, $2\tau\,\Omega^{\mathcal G}_{\rm overlap}$. Its minimum can be computed with the following acceleration of the iterative forward-backward splitting scheme:
$$\beta^p = \big(I - \pi_{(\tau/\sigma)K}\big)\Big(h^p - \frac{1}{n\sigma}\Psi^T(\Psi h^p - y)\Big),$$
$$c_p = (1 - t_p)\,c_{p-1}, \qquad t_{p+1} = \frac{-c_p + \sqrt{c_p^2 + 8c_p}}{4}, \qquad (2)$$
$$h^{p+1} = \beta^p\Big(1 - t_{p+1} + \frac{t_{p+1}}{t_p}\Big) + \beta^{p-1}\,(t_p - 1)\,\frac{t_{p+1}}{t_p},$$
for a suitable choice of $\sigma$. By one-homogeneity of $\Omega^{\mathcal G}_{\rm overlap}$, the proximity operator associated with $\frac\tau\sigma\Omega^{\mathcal G}_{\rm overlap}$ reduces to the identity minus the projection onto the subdifferential of $\frac\tau\sigma\Omega^{\mathcal G}_{\rm overlap}$ at the origin, which is a closed convex set. We denote this projection by $\pi_{(\tau/\sigma)K}$, where $K = \partial\Omega^{\mathcal G}_{\rm overlap}(0)$. The scheme above is inspired by [10] and is equivalent to the algorithm named FISTA [5], whose convergence is guaranteed, as recalled in the following theorem.

Theorem 1. Given $\beta^0\in\mathbb R^d$ and $\sigma = \|\Psi^T\Psi\|/n$, let $h^1 = \beta^0$, $t_1 = 1$, and $c_0 = 1$. Then there exists a constant $C_0$ such that the iterative update (2) satisfies
$$\mathcal E_\tau(\beta^p) - \mathcal E_\tau(\beta^\ast) \le \frac{C_0}{p^2}. \qquad (3)$$

As with other accelerations of the basic forward-backward splitting algorithm, such as [19, 6, 4], convergence of the sequence $\beta^p$ itself is no longer guaranteed unless strong convexity is assumed. However, sacrificing theoretical convergence for speed may be mandatory in large-scale applications, and there is strong empirical evidence that $\beta^p$ is indeed convergent (see Section 4).
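The accelerated scheme, written generically: the projection is left abstract here (a placeholder to be supplied, e.g. the sketch given after Algorithm 1 below), and for simplicity this sketch uses the standard FISTA momentum rule in place of the paper's $(c_p, t_p)$ recursion; both choices yield the $O(1/p^2)$ rate.

```python
import numpy as np

def glo_fista(Psi, y, tau, project, tol=1e-6, max_iter=10000):
    """Accelerated forward-backward scheme for E_tau of Eq. (1).

    project(w, t) must return the projection of w onto t*K, where K is the
    subdifferential of the penalty at the origin (Section 3.2); the proximity
    operator is then identity minus this projection.
    """
    n, d = Psi.shape
    sigma = np.linalg.norm(Psi.T @ Psi, 2) / n       # sigma = ||Psi^T Psi|| / n
    beta_old, h, t = np.zeros(d), np.zeros(d), 1.0
    for _ in range(max_iter):
        w = h - Psi.T @ (Psi @ h - y) / (n * sigma)  # forward (gradient) step
        beta = w - project(w, tau / sigma)           # backward (proximal) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        h = beta + ((t - 1.0) / t_new) * (beta - beta_old)
        if np.linalg.norm(beta - beta_old) <= tol * max(np.linalg.norm(beta_old), 1.0):
            return beta
        beta_old, t = beta, t_new
    return beta
```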
3.2 The projection

The proximity operator of the penalty $\Omega^{\mathcal G}_{\rm overlap}$ does not admit a closed form and must be computed approximately. Indeed, the projection onto the convex set
$$K = \partial\Omega^{\mathcal G}_{\rm overlap}(0) = \{v\in\mathbb R^d :\ \|v\|_{G_r}\le1 \text{ for } r = 1,\ldots,B\}$$
cannot be decomposed group-wise, as in standard group-$\ell_1$ regularization, where the proximity operator resolves to a group-wise soft-thresholding operator (see Eq. (9) below). Nonetheless, the following lemma shows that, when evaluating the projection $\pi_{\tau K}$, we can restrict ourselves to a subset $\tilde{\mathcal G}\subset\mathcal G$ of $\tilde B = |\tilde{\mathcal G}| \le B$ active groups. This equivalence is crucial for speeding up the algorithm: $\tilde B$ is the number of selected groups, which is small when one is interested in sparse solutions.

Lemma 1. Given $\beta\in\mathbb R^d$, $\mathcal G = \{G_r\}_{r=1}^B$ with $G_r\subset\{1,\ldots,d\}$, and $\tau > 0$, the projection onto the convex set $\tau K$, with $K = \{v\in\mathbb R^d :\ \|v\|_{G_r}\le1 \text{ for } r=1,\ldots,B\}$, is given by
$$\text{minimize } \|v - \beta\|^2 \quad\text{subject to } v\in\mathbb R^d,\ \|v\|_G\le\tau \text{ for } G\in\tilde{\mathcal G}, \qquad (4)$$
where $\tilde{\mathcal G} := \{G\in\mathcal G :\ \|\beta\|_G > \tau\}$.

The proof (given in the supplementary material) is based on the fact that the convex set $\tau K$ is an intersection of cylinders, each centered on a coordinate subspace. Since $\tilde B$ is typically much smaller than $d$, it is convenient to solve the dual problem associated with (4).

Theorem 2. Given $\beta\in\mathbb R^d$, $\{G_r\}_{r=1}^B$ with $G_r\subset\{1,\ldots,d\}$, and $\tau > 0$, the projection onto the convex set $\tau K$ is given by
$$[\pi_{\tau K}(\beta)]_j = \frac{\beta_j}{1 + \sum_{r=1}^{\tilde B}\lambda^\ast_r\,\mathbf 1_{r,j}} \quad\text{for } j = 1,\ldots,d, \qquad (5)$$
where $\lambda^\ast$ is the solution of $\arg\max_{\lambda\in\mathbb R_+^{\tilde B}} f(\lambda)$, with
$$f(\lambda) := -\sum_{j=1}^d\frac{\beta_j^2}{1 + \sum_{r=1}^{\tilde B}\lambda_r\,\mathbf 1_{r,j}} \;-\; \tau^2\sum_{r=1}^{\tilde B}\lambda_r, \qquad (6)$$
$\tilde{\mathcal G} = \{G\in\mathcal G :\ \|\beta\|_G > \tau\} =: \{\tilde G_1,\ldots,\tilde G_{\tilde B}\}$, and $\mathbf 1_{r,j}$ equal to $1$ if $j$ belongs to group $\tilde G_r$ and $0$ otherwise.

Problem (6) is the dual problem associated with (4) and, since strong duality holds, the minimum of (4) is equal to the maximum of the dual problem, which can be solved efficiently via Bertsekas' projected Newton method described in [7], reported here as Algorithm 1.

Algorithm 1 (Projection)
Given: $\beta\in\mathbb R^d$, $\lambda_{\rm init}\in\mathbb R^{\tilde B}$, $\gamma\in(0,1)$, $\theta\in(0,1/2)$, $\epsilon > 0$
Initialize: $q = 0$, $\lambda^0 = \lambda_{\rm init}$
while ($\nabla_r f(\lambda^q) > 0$ if $\lambda^q_r = 0$, or $|\nabla_r f(\lambda^q)| > \epsilon$ if $\lambda^q_r > 0$, for some $r = 1,\ldots,\tilde B$) do
  $q := q + 1$
  $\epsilon_q = \min\{\epsilon,\ \|\lambda^q - [\lambda^q - \nabla f(\lambda^q)]_+\|\}$
  $I_+^q = \{r :\ 0\le\lambda^q_r\le\epsilon_q,\ \nabla_r f(\lambda^q) > 0\}$
  $H^q_{r,s} = 0$ if $r\ne s$ and ($r\in I_+^q$ or $s\in I_+^q$); otherwise $H^q_{r,s} = \partial_r\partial_s f(\lambda^q)$   (7)
  $\lambda(\gamma^m) = [\lambda^q - \gamma^m (H^q)^{-1}\nabla f(\lambda^q)]_+$
  choose the smallest $m\ge0$ such that the Armijo-type sufficient-decrease condition of [7], with parameter $\theta$ and stepsizes $\gamma^m$, holds for $\lambda(\gamma^m)$
  $\lambda^{q+1} = \lambda(\gamma^m)$
end while
return $\lambda^{q+1}$

Bertsekas' iterative scheme combines the basic simplicity of the steepest descent iteration [22] with the quadratic convergence of the projected Newton method [8]. It does not involve the solution of a quadratic program, thereby avoiding the associated computational overhead.
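The reduced dual (6) is smooth and $\tilde B$-dimensional, so even a plain first-order scheme recovers the projection. The sketch below uses projected gradient ascent with a fixed step, a deliberate simplification of the projected Newton method of Algorithm 1; only the active groups of Lemma 1 enter the dual variable.

```python
import numpy as np

def project_tauK(beta, groups, tau, step=1e-2, max_iter=500):
    """Projection of beta onto tau*K via the reduced dual problem (6)."""
    active = [G for G in groups if np.linalg.norm(beta[list(G)]) > tau]
    if not active:
        return beta.copy()                  # beta already lies in tau*K
    ind = np.zeros((len(active), beta.size))
    for r, G in enumerate(active):
        ind[r, list(G)] = 1.0               # ind[r, j] = 1_{r,j}
    lam = np.zeros(len(active))
    for _ in range(max_iter):
        denom = 1.0 + ind.T @ lam           # 1 + sum_r lam_r 1_{r,j}, per coordinate
        # gradient of f(lam): d f / d lam_r = sum_j 1_{r,j} beta_j^2 / denom_j^2 - tau^2
        grad = ind @ (beta**2 / denom**2) - tau**2
        lam = np.maximum(lam + step * grad, 0.0)  # ascent step + projection on R_+
    return beta / (1.0 + ind.T @ lam)       # primal projection, Eq. (5)
```

Wired to the proximal iteration of Section 3.1 as `glo_fista(Psi, y, tau, lambda w, t: project_tauK(w, groups, t))`.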
3.3 Computing the regularization path

In Algorithm 2 we report the complete Group Lasso with Overlap primal-dual (GLO-pridu) scheme for computing the regularization path, i.e. the set of solutions corresponding to different values of the regularization parameter $\tau_1 > \ldots > \tau_T$, for problem (1). Note that we employ the continuation strategy proposed in [11]. A similar warm start is applied to the inner iteration, where at the $p$-th step $\lambda_{\rm init}$ is determined by the solution of the $(p-1)$-th projection. This initialization empirically proved to guarantee convergence, despite the local nature of Bertsekas' scheme.

Algorithm 2 (GLO-pridu regularization path)
Given: $\tau_1 > \tau_2 > \cdots > \tau_T$, $\mathcal G$, $\gamma\in(0,1)$, $\theta\in(0,1/2)$, $\epsilon_0 > 0$, $\nu > 0$
Let: $\sigma = \|\Psi^T\Psi\|/n$
Initialize: $\beta(\tau_0) = 0$
for $t = 1,\ldots,T$ do
  Initialize: $\beta^0 = \beta(\tau_{t-1})$, $\lambda^0 = 0$
  while $\|\beta^p - \beta^{p-1}\| > \nu\,\|\beta^{p-1}\|$ do
    set $w = h^p - (n\sigma)^{-1}\Psi^T(\Psi h^p - y)$
    find $\tilde{\mathcal G} = \{G\in\mathcal G :\ \|w\|_G > \tau\}$
    compute $\lambda^p$ via Algorithm 1 with groups $\tilde{\mathcal G}$, initialization $\lambda^{p-1}$ and tolerance $\epsilon_0\,p^{-3/2}$
    compute $\beta^p$ as $\beta^p_j = w_j\big(1 + \sum_{r=1}^{\tilde B}\lambda^p_r\,\mathbf 1_{r,j}\big)^{-1}$ for $j = 1,\ldots,d$ (see Equation (5))
    update $c_p$, $t_p$, and $h^p$ as in (2)
  end while
  $\beta(\tau_t) = \beta^p$
end for
return $\beta(\tau_1),\ldots,\beta(\tau_T)$

3.4 The replicates formulation

An alternative way to solve the optimization problem (1) is proposed in [12], where the authors show that (1) is equivalent to standard group-$\ell_1$ regularization (without overlap) in an expanded space built by replicating the variables belonging to more than one group:
$$\tilde\beta^\ast \in \arg\min_{\tilde\beta\in\mathbb R^{\tilde d}}\ \frac1n\big\|\tilde\Psi\tilde\beta - y\big\|^2 + 2\tau\sum_{r=1}^B\|\tilde\beta\|_{\tilde G_r}, \qquad (8)$$
where $\tilde\Psi$ is the matrix built by concatenating copies of $\Psi$ restricted each to a certain group, i.e. $(\tilde\varphi_j)_{j\in\tilde G_r} = (\varphi_j)_{j\in G_r}$, with $\{\tilde G_1,\ldots,\tilde G_B\} = \{[1,\ldots,|G_1|],\ [|G_1|+1,\ldots,|G_1|+|G_2|],\ \ldots,\ [\tilde d-|G_B|+1,\ldots,\tilde d\,]\}$, and $\tilde d = \sum_{r=1}^B|G_r|$ is the total number of variables after replication. One can then reconstruct $\beta^\ast$ from $\tilde\beta^\ast$ as $\beta^\ast = \sum_{r=1}^B\pi_{G_r}(\tilde\beta^\ast)$, where $\pi_{G_r}:\mathbb R^{\tilde d}\to\mathbb R^d$ maps $\tilde\beta$ to the vector $v\in\mathbb R^d$ such that ${\rm supp}(v)\subset G_r$ and $(v_j)_{j\in G_r} = (\tilde\beta_j)_{j\in\tilde G_r}$, for $r = 1,\ldots,B$.
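Building the expanded matrix explicitly takes one slice per group; this is exactly the memory and conditioning cost that GLO-pridu avoids. A minimal sketch:

```python
import numpy as np

def expand_with_replicates(Psi, groups):
    """Build Psi_tilde by concatenating the columns of Psi restricted to
    each group; a variable belonging to k groups is replicated k times.
    Also returns the index lists of the non-overlapping groups G_tilde_r."""
    blocks, new_groups, offset = [], [], 0
    for G in groups:
        blocks.append(Psi[:, list(G)])
        new_groups.append(list(range(offset, offset + len(G))))
        offset += len(G)
    return np.hstack(blocks), new_groups

def recover_beta(beta_tilde, groups, d):
    """Reconstruct beta from the replicate solution: each coordinate is the
    sum of its replicate coefficients, one per group containing it."""
    beta = np.zeros(d)
    offset = 0
    for G in groups:
        beta[list(G)] += beta_tilde[offset:offset + len(G)]
        offset += len(G)
    return beta
```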
In fact, Equation (7) tells us that the part of matrix H corresponding to the active set I+ is ?=B ?? + B ?+ , where B ?? is the number of non active constraints, diagonal. As a consequence, if B ? and B+ is the number of active constraints, then the complexity of inverting matrix H is at most 3 ?+ ) + O(B ?? ?? ? B ?? non diagonal part of matrix H is highly sparse, since O(B ). Furthermore the B ?r ? G ? s = ? and the complexity of inverting it is in practice much lower than O(B ? 3 ). Hr,s = 0 if G ? 3 ? ? The worst case complexity for computing the projection onto K is thus O(q ? B+ ) + O(q ? B? ), where q is the number of iterations necessary to reach convergence. Note that even if, in order to guarantee convergence, the tolerance for evaluating convergence of the inner iteration must decrease with the number of external iterations, in practice, thanks to warm starting, we observed that q is rarely greater than 10 in the experiments presented here. Concerning the number of iterations required to reach convergence for GL-prox in the replicates formulation, we empirically observed that it requires a much higher number of iterations than GLOpridu (see Table 3). We argue that such behavior is due to the combination of two occurences: 1) the ? is 0 even if ? is locally well conditioned, 2) the decomposition local condition number of matrix ? ? ? ? of ? as ? is possibly not unique, which is required in order to have a unique solution for (8). ? In fact, since E? is convex but not The former is due to the presence of replicated columns in ?. necessarily strictly convex ? as when n < d ?, uniqueness and convergence is not always guaranteed unless some further assumption is imposed. Most convergence results relative to `1 regularization link uniqueness of the solution as well as the rate of convergence of the Soft Thresholding Iteration to some measure of local conditioning of the Hessian of the differentiable part of E? (see for instance Proposition 4.1 in [11], where the Hessian restricted to the set of relevant variables is required to ? so that, if the relevant ? = 1/n? ? T ?, be full rank). In our case the Hessian for GL-prox is simply H ? groups have non null intersection, then H restricted to the set of relevant variables is by no means full rank. Concerning the latter argument, we must say that in many real world problems, such as bioinformatics, one cannot easily verify that the solution indeed has a unique decomposition. In fact, we can think of trivial examples where the replicates formulation has not a unique solution. 4 Numerical Experiments In this section we present numerical experiments aimed at comparing the running time performance of GLO-pridu with state-of-the-art algorithms. To ensure a fair comparison, we first run some preliminary experiments to identify the fastest codes for group `1 regularization with no overlap. We refer to [6] for an extensive empirical and theoretical comparison of different optimization procedures for solving `1 regularization. Further empirical comparisons can be found in [15]. 4.1 Comparison of different implementations for standard group lasso We considered three algorithms which are representative of the optimization techniques used to solve group lasso: interior-point methods, (group) coordinate descent and its variations, and proximal methods. As an instance of the first set of techniques we employed the publicly available Matlab code at http://www.di.ens.fr/?fbach/grouplasso/index.htm described in [1]. 
For coordinate descent methods, we employed the R-package grlplasso, which implements block coordinate gradient descent minimization for a set of possible loss functions. In the following we will refer to these two algorithms as ??GL-IP? and ?GL-BCGD?. Finally we use our Matlab implementation of Algorithm GL-prox as an instance of proximal methods. We first observe that the solutions of the three algorithms coincide up to an error which depends on each algorithm tolerance. We thus need to tune each tolerance in order to guarantee that all iterative algorithms are stopped when the level of approximation to the true solution is the same. 6 Table 1: Running time (mean and standard deviation) in seconds for computing the entire regularization path of GL-IP, GL-BCGD, and GL-prox for different values of B, and n. For B = 1000, GL-IP could not be computed due to memory reasons. n = 100 n = 500 n = 1000 GL-IP GL-BCGD GL-prox GL-IP GL-BCGD GL-prox B = 10 5.6 ? 0.6 2.1 ? 0.6 0.21 ? 0.04 B = 10 2.30 ? 0.27 2.15 ? 0.16 0.1514 ? 0.0025 GL-IP GL-BCGD GL-prox B = 10 1.92 ? 0.25 2.06 ? 0.26 0.182 ? 0.006 B = 100 60 ? 90 2.8 ? 0.6 2.9 ? 0.4 B = 1000 ? 14.4 ? 1.5 183 ? 19 B = 100 370 ? 30 4.7 ? 0.5 2.54 ? 0.16 B = 100 328 ? 22 18 ? 3 4.7 ? 0.5 B = 1000 ? 16.5 ? 1.2 109 ? 6 B = 1000 ? 20.6 ? 2.2 112 ? 6 Toward this end, we run Algorithm GL-prox with machine precision, ? = 10?16 , in order to have a good approximation of the asymptotic solution. We observe that for many values of n and d, and over a large range of values of ? , the approximation of GL-prox when ? = 10?6 is of the same order of the approximation of GL-IP with optparam.tol = 10?9 , and of GL-BCGD with tol = 10?12 . Note also that with these tolerances the three solutions coincide also in terms of selection, i.e. their supports are identical for each value of ? . Therefore the following results correspond to optparam.tol = 10?9 for GL-IP, tol = 10?12 for GL-BCGD, and ? = 10?6 for GL-prox. For the other parameters of GL-IP we used the values used in the demos supplied with the code. Concerning the data generation protocol, the input variables x = (x1 , . . . , xd ) are uniformly drawn from [?1, 1]d . The labels y are computed using a noise-corrupted linear regression function, i.e. y = ? ? x + w, where ? depends on the first 30 variables, ?j = 1 if j = 1, . . . , 30, and 0 otherwise, w is an additive gaussian white noise, and the signal to noise ratio is 5:1. In this case the dictionary coincides with the variables, ?j (x) = xj for j = 1, . . . , d. We then evaluate the entire regularization path for the three algorithms with B sequential groups of 10 variables, (G1 =[1, . . . , 10], G2 =[11, . . . , 20], and so on), for different values of n and B. In order to make sure that we are working on the correct range of values for the parameter ? , we first evaluate the set of solutions of GL-prox corresponding to a large range of 500 values for ? , with ? = 10?4 . We then determine the smallest value of ? which corresponds to selecting less than n variables, ?min , and the smallest one returning the null solution, ?max . Finally we build the geometric series of 50 values between ?min and ?max , and use it to evaluate the regularization path on the three algorithms. In order to obtain robust estimates of the running times, we repeat 20 times for each pair n, B. In Table 1 we report the computational times required to evaluate the entire regularization path for the three algorithms. 
Algorithms GL-BCGD and GL-prox are always faster than GL-IP, which, due to memory reasons, cannot be applied to problems with more than 5000 variables, since it requires storing the d × d matrix $\Psi^T\Psi$. It must be said that the code for GL-IP was made available mainly in order to allow reproducibility of the results presented in [1], and is not optimized in terms of time and memory occupation. However, it is well known that standard second-order methods are typically precluded on large data sets, since they need to solve large systems of linear equations to compute the Newton steps. GL-BCGD is the fastest for B = 1000, whereas GL-prox is the fastest for B = 10, 100. The candidates as benchmark algorithms for comparison with GLO-pridu are thus GL-prox and GL-BCGD. Nevertheless, we observed that when the input data matrix contains a significant fraction of replicated columns, the latter does not provide sparse solutions. We therefore compare GLO-pridu with GL-prox only.

4.1.1 Projection vs duplication

The data generation protocol is equal to the one described in the previous experiments, but β depends on the first 12b/5 variables (which correspond to the first three groups):

$$\beta = (\underbrace{c, \dots, c}_{12b/5 \text{ times}}, \underbrace{0, \dots, 0}_{d - 12b/5 \text{ times}}).$$

We then define B groups of size b, so that $\tilde d = B \cdot b > d$. The first three groups correspond to the subset of relevant variables, and are defined as G₁ = [1, ..., b], G₂ = [4b/5 + 1, ..., 9b/5], and G₃ = [1, ..., b/5, 8b/5 + 1, ..., 12b/5], so that they have a 20% pair-wise overlap. The remaining B − 3 groups are built by randomly drawing sets of b indexes from [1, d]. In the following we let n = 10·|G₁ ∪ G₂ ∪ G₃|, i.e. n is ten times the number of relevant variables, and vary d and b. We also vary the number of groups B, so that the dimension of the expanded space is α times the input dimension, $\tilde d = \alpha d$, with α = 1.2, 2, 5. Clearly this amounts to taking B = α·d/b. The parameter α can be thought of as the average number of groups a single variable belongs to. We identify the correct range of values for τ as in the previous experiments, using GLO-pridu with loose tolerance, and then evaluate the running time and the number of iterations necessary to compute the entire regularization path for GL-prox on the expanded space and for GLO-pridu, both with ε = 10⁻⁶. Finally, we repeat this 20 times for each combination of the three parameters d, b, and α.
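For reference, the "replicates" construction on which GL-prox is run simply duplicates each variable once per group containing it, so that the overlapping problem becomes an ordinary group lasso in dimension $\tilde d = \sum_r |G_r|$. The following is an illustrative sketch, not the authors' code.

```python
import numpy as np

def expand_with_replicates(X, groups):
    """Duplicate columns so that overlapping groups become disjoint.

    Returns the expanded n x d_tilde matrix and the disjoint groups
    defined over the replicated coordinates."""
    cols, new_groups, offset = [], [], 0
    for g in groups:
        cols.append(X[:, g])                  # one copy of the columns per group
        new_groups.append(np.arange(offset, offset + len(g)))
        offset += len(g)
    return np.hstack(cols), new_groups
```

A minimizer on the expanded space is mapped back by summing, for each original variable, its replicated coefficients; the possible non-uniqueness of this decomposition is exactly the issue discussed above.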
Table 2: Running time (mean ± standard deviation) in seconds for b = 10 (top) and b = 100 (bottom). For each d and α, the left and right entries correspond to GLO-pridu and GL-prox, respectively.

b = 10:
          d = 1000                    d = 5000                     d = 10000
α = 1.2   0.15 ± 0.04 | 0.20 ± 0.09   1.1 ± 0.4 | 1.0 ± 0.6        2.1 ± 0.7 | 2.1 ± 1.4
α = 2     1.6 ± 0.9 | 5.1 ± 2.0       1.55 ± 0.29 | 2.4 ± 0.7      3.0 ± 0.6 | 4.5 ± 1.4
α = 5     12.4 ± 1.3 | 68 ± 8         103 ± 12 | 790 ± 57          460 ± 110 | 2900 ± 400

b = 100:
          d = 1000                    d = 5000                     d = 10000
α = 1.2   11.7 ± 0.4 | 24.1 ± 2.5     31 ± 13 | 38 ± 15            16.6 ± 2.1 | 13 ± 3
α = 2     11.6 ± 0.4 | 42 ± 4         90 ± 5 | 335 ± 21            90 ± 30 | 270 ± 120
α = 5     13.5 ± 0.7 | 1467 ± 13      85 ± 3 | 1110 ± 80           296 ± 16 | --

Table 3: Number of iterations (mean ± standard deviation) for b = 10 (top) and b = 100 (bottom). For each d and α, the left and right entries correspond to GLO-pridu and GL-prox, respectively.

b = 10:
          d = 1000                    d = 5000                     d = 10000
α = 1.2   100 ± 30 | 80 ± 30          100 ± 40 | 70 ± 30           100 ± 30 | 70 ± 40
α = 2     1200 ± 500 | 1900 ± 800     148 ± 25 | 139 ± 24          160 ± 30 | 137 ± 26
α = 5     2150 ± 160 | 11000 ± 1300   6600 ± 500 | 27000 ± 2000    13300 ± 1900 | 49000 ± 6000

b = 100:
          d = 1000                    d = 5000                     d = 10000
α = 1.2   913 ± 12 | 2160 ± 210       600 ± 400 | 600 ± 300        81 ± 11 | 63 ± 11
α = 2     894 ± 11 | 2700 ± 300       1860 ± 110 | 4590 ± 290      1000 ± 500 | 1800 ± 900
α = 5     895 ± 10 | 4200 ± 400       1320 ± 30 | 6800 ± 500       2100 ± 60 | --

Running times and numbers of iterations are reported in Tables 2 and 3, respectively. When the degree of overlap α is low, the computational times of GL-prox and GLO-pridu are comparable. As α increases, there is a clear advantage in using GLO-pridu instead of GL-prox. The same behavior occurs for the number of iterations.

5 Discussion

We have presented an efficient optimization procedure for computing the solution of group lasso with overlapping groups of variables, which allows dealing with high-dimensional problems with large group overlap. We have empirically shown that our procedure has a great computational advantage with respect to state-of-the-art algorithms for group lasso applied on the expanded space built by replicating variables belonging to more than one group. We also mention that computational performance may improve if our scheme is used as the core of the optimization step of active set methods, such as [23]. Finally, as shown in [17], the improved computational performance enables the use of group $\ell_1$ regularization with overlap for pathway analysis of high-throughput biomedical data, since it can be applied to the entire data set using all the information present in online databases, without pre-processing for dimensionality reduction.

References
[1] F. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
[2] F. Bach. High-dimensional non-linear variable selection through hierarchical kernel learning. Technical Report HAL 00413473, INRIA, 2009.
[3] F. R. Bach, G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, volume 69 of ACM International Conference Proceeding Series, 2004.
[4] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419–2434, 2009.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.
[6] S. Becker, J. Bobin, and E. Candes. NESTA: A fast and accurate first-order method for sparse recovery, 2009.
[7] D. Bertsekas. Projected Newton methods for optimization problems with simple constraints. SIAM Journal on Control and Optimization, 20(2):221–246, 1982.
[8] R. Brayton and J. Cullum. An algorithm for minimizing a differentiable function subject to box constraints and errors. J. Opt. Th. Appl., 29:521–558, 1979.
[9] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899–2934, December 2009.
[10] O. Guler. New proximal point algorithm for convex minimization. SIAM J. on Optimization, 2(4):649–664, 1992.
[11] E. T. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for l1-minimization: Methodology and convergence. SIOPT, 19(3):1107–1130, 2008.
[12] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In ICML, page 55, 2009.
[13] R. Jenatton, J.-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, INRIA, 2009.
[14] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In Proceedings of ICML 2010, 2010.
[15] I. Loris.
On the performance of algorithms for the minimization of l1-penalized functionals. Inverse Problems, 25(3):035008, 2009.
[16] L. Meier, S. van de Geer, and P. Buhlmann. The group lasso for logistic regression. J. R. Statist. Soc. B, 70:53–71, 2008.
[17] S. Mosci, S. Villa, A. Verri, and L. Rosasco. A fast algorithm for structured gene selection. Presented at MLSB 2010, Edinburgh.
[18] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR, 269(3):543–547, 1983.
[19] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog. Series A, 103(1):127–152, 2005.
[20] M. Y. Park and T. Hastie. L1-regularization path algorithm for generalized linear models. J. R. Statist. Soc. B, 69:659–677, 2007.
[21] L. Rosasco, M. Mosci, S. Santoro, A. Verri, and S. Villa. Iterative projection methods for structured sparsity regularization. Technical Report MIT-CSAIL-TR-2009-050, MIT, 2009.
[22] J. Rosen. The gradient projection method for nonlinear programming, part I: linear constraints. J. Soc. Ind. Appl. Math., 8:181–217, 1960.
[23] V. Roth and B. Fischer. The group-lasso for generalized linear models: uniqueness of solutions and efficient algorithms. In Proceedings of the 25th ICML, 2008.
[24] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics, 37(6A):3468–3497, 2009.
Empirical Risk Minimization with Approximations of Probabilistic Grammars

Shay B. Cohen
Language Technologies Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
[email protected]

Noah A. Smith
Language Technologies Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
[email protected]

Abstract

Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of the parameters of a fixed probabilistic grammar using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting.

1 Introduction

Probabilistic grammars are an important statistical model family used in natural language processing [7], computer vision [16], computational biology [19] and, more recently, in human activity analysis [12]. They are commonly estimated using the maximum likelihood estimate or variants. Such estimation can be viewed as minimizing empirical risk with the log-loss [21]. The log-loss is not bounded when applied to probabilistic grammars, and that makes it hard to obtain uniform convergence results. Such results would help in deriving sample complexity bounds, that is, bounds on the number of training examples required to obtain accurate estimation. To overcome this problem, we derive distribution-dependent uniform convergence results for probabilistic grammars. In that sense, our learning framework relates to previous work about learning in a distribution-dependent setting [15] and structural risk minimization [21]. Our work is also related to [8], which discusses the statistical properties of estimation of parsing models in a distribution-free setting. Based on the notion of bounded approximations [1, 9], we define a sequence of increasingly better approximations for probabilistic grammars, which we call "proper approximations." We then derive sample complexity bounds in our framework, for both the supervised case and the unsupervised case. Our results rely on an exponential decay in probabilities with respect to the length of the derivation (the number of derivation steps the grammar takes when generating a structure). This means that most of the probability mass for such a distribution is concentrated on a small number of grammatical derivations. We formalize this notion and use it in many of our results. For applications involving real-world data of finite size (as in natural language processing, computational biology, and so on), we believe this is a reasonable assumption.

The rest of the paper is organized as follows. §2 gives an overview of probabilistic grammars. §3 gives an overview of the learning setting. §4 presents proper approximations, which are approximate concept spaces that permit the derivation of sample complexity bounds for probabilistic grammars. §5 describes the main sample complexity results. We discuss our results in §6 and conclude in §7.

2 Probabilistic Grammars

A probabilistic grammar defines a probability distribution over grammatical derivations generated through a step-by-step process. For example, probabilistic context-free grammars (PCFGs) generate phrase-structure trees by recursively rewriting nonterminal symbols as sequences of "child" symbols according to a fixed set of production rules.
Each rewrite of a PCFG is conditionally independent of previous ones given one PCFG state; this Markov property permits efficient inference for the probability distribution defined by the probabilistic grammar. In this paper, we will assume that any grammatical derivation z fully determines a string x, denoted yield(z). There may be many derivations z for a given string (perhaps infinitely many for some kinds of grammars; we assume that the number of derivations is finite). In general, a probabilistic grammar defines the probability of a grammatical derivation z as:

$$h_\theta(z) = \prod_{k=1}^{K}\prod_{i=1}^{N_k} \theta_{k,i}^{\psi_{k,i}(z)} = \exp\Big(\sum_{k=1}^{K}\sum_{i=1}^{N_k} \psi_{k,i}(z)\log\theta_{k,i}\Big) \quad (1)$$

$\psi_{k,i}$ is a function that "counts" the number of times the kth distribution's ith event occurs in the derivation. The θ are a collection of K multinomials $\langle\theta_1, \dots, \theta_K\rangle$, the kth of which includes $N_k$ events. We let $N = \sum_{k=1}^K N_k$ denote the total number of derivation event types. D(G) denotes the set of all possible derivations of G. We define $D_x(G) = \{z \in D(G) \mid \mathrm{yield}(z) = x\}$. We let |x| denote the length of the string x, and $|z| = \sum_{k=1}^K\sum_{i=1}^{N_k} \psi_{k,i}(z)$ the "length" (number of event tokens) of the derivation z.

Parameter estimation for probabilistic grammars means choosing θ from complete data ("supervised") or incomplete data ("semi-supervised" or "unsupervised," the latter usually implying that strings x are evidence but all derivations z are missing). We can view parameter estimation as identifying a hypothesis from $H(G) = \{h_\theta(z) \mid \theta\}$ or, equivalently, from $F(G) = \{-\log h_\theta(z) \mid \theta\}$. For simplicity of notation, we assume that there is a fixed grammar and use H to refer to H(G) and F to refer to F(G).¹ For every $f_\theta \in F$ we have parameters θ such that $f_\theta(z) = -\sum_{k=1}^K\sum_{i=1}^{N_k} \psi_{k,i}(z)\log\theta_{k,i}$.

We will make a few assumptions about G and P(z), the distribution that generates derivations from D(G) (note that P does not have to be a probabilistic grammar):
• Bounded derivation length: There is an α ≥ 1 such that, for all z, |z| ≤ α|yield(z)|. Further, |z| ≥ |x|.
• Exponential decay of derivations: There is a constant r < 1 and a constant L ≥ 0 such that $P(z) \le L r^{|z|}$.
• Exponential decay of strings: Let $\Lambda(k) = |\{z \in D(G) \mid |z| = k\}|$ be the number of derivations of length k in G. Taking r as above, we assume there exists a constant q < 1 such that $\Lambda(k) r^k \le q^k$. This implies that the number of derivations of length k may be exponentially large (e.g., as with many PCFGs), but is bounded by $(q/r)^k$.
• Bounded expectations of rules: There is a B < ∞ such that $E[\psi_{k,i}(z)] \le B$ for all k and i.

We note that, for example, these assumptions must hold for any P whose support consists of a finite set. These assumptions also hold in many cases when P itself is a probabilistic grammar. See the supplementary material for a note about these assumptions, their empirical justification, and the relationship to Tsybakov noise [20, 15].

3 The Learning Setting

In the supervised learning setting, a set of grammatical derivations $z_1, \dots, z_n$ is used to estimate θ, implying a choice of h ∈ H that "agrees" with the training data. MLE chooses h* ∈ H to maximize the likelihood of the data:

$$h^* = \operatorname*{argmax}_{h\in H}\ \frac{1}{n}\sum_{i=1}^{n}\log h(z_i) = \operatorname*{argmin}_{h\in H}\ \underbrace{\sum_{z\in D(G)} \tilde P(z)\,(-\log h(z))}_{R_{\mathrm{emp},n}(-\log h)} \quad (2)$$

¹ Learning the rules in a grammar is another important problem that has received much attention [11].

As shown, this equates to minimizing the empirical risk, or the expected value of a particular loss function known as the log-loss.
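As an illustration of Eqs. (1)–(2): once each derivation is summarized by its table of event counts ψ, the log-probability and the supervised maximum-likelihood estimate are each a few lines. This is a hypothetical sketch; the count representation (a list of per-multinomial count arrays) is our own.

```python
import numpy as np

def derivation_log_prob(theta, psi):
    """log h_theta(z) = sum_{k,i} psi_{k,i}(z) * log theta_{k,i}.
    theta and psi are lists of K arrays; theta[k] is a multinomial
    over the N_k events of the k-th distribution."""
    return sum(float(psi_k @ np.log(theta_k))
               for theta_k, psi_k in zip(theta, psi))

def supervised_mle(psis, K):
    """MLE from fully observed derivations: normalize summed counts
    within each multinomial."""
    theta = []
    for k in range(K):
        counts = sum(psi[k] for psi in psis).astype(float) + 1e-12  # avoid log(0)
        theta.append(counts / counts.sum())
    return theta
```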
The expected risk, under P, is the (unknowable) quantity

$$R(-\log h) = \sum_{z\in D(G)} P(z)\,(-\log h(z)) = E_P[-\log h].$$

Showing convergence of the form $\sup_{h\in H} |R_{\mathrm{emp},n}(-\log h) - R(-\log h)| \to 0$ (in probability) as n → ∞ is referred to as double-sided uniform convergence. (We note that $\sup_{h\in H} |R_{\mathrm{emp},n}(-\log h) - R(-\log h)| = \sup_{f\in F} |R_{\mathrm{emp},n}(f) - R(f)|$.) This kind of uniform convergence is the driving force in showing that the empirical risk minimizer is consistent, i.e., the minimized empirical risk converges to the minimized expected risk. We assume familiarity with the relevant literature about empirical risk minimization; see [21].

4 Proper Approximations

The log-loss is unbounded, so there is no function F : D(G) → ℝ such that, ∀f ∈ F, ∀z ∈ D(G), f(z) ≤ F(z); i.e., there is no envelope to uniformly bound F. This makes it difficult to obtain a uniform convergence result for $\sup_{f\in F} |R_{\mathrm{emp},n}(f) - R(f)|$. Vapnik [21, page 93] shows that we can still get consistency for the maximum likelihood estimator if we bound from below and above the family of probability distributions at hand. Instead of making this restriction, which is heavy for probabilistic grammars, we revise the learning model according to well-known results about the convergence of stochastic processes. The revision approximates the concept space using a sequence $F_1, F_2, \dots$ and replaces two-sided uniform convergence with convergence on the sequence of concept spaces. The concept spaces in the sequence vary as a function of the number of samples we have. We next construct the sequence of concept spaces, and in §5 we return to the learning model.

Our approximations are based on the concept of bounded approximations [1, 9]. Let $F_m$ (for m ∈ {1, 2, ...}) be a sequence of concept spaces contained in F. We will require that as m grows larger, $F_m$ becomes a better approximation of the original concept space F. We say that the sequence "properly approximates" F if there exists a non-increasing function $\epsilon_{\mathrm{tail}}(m)$ such that $\epsilon_{\mathrm{tail}}(m) \to 0$ as m → ∞, a non-increasing function $\epsilon_{\mathrm{bound}}(m)$ such that $\epsilon_{\mathrm{bound}}(m) \to 0$ as m → ∞, and an operator $C_m : F \to F_m$ such that for all m larger than some M:

Containment: $F_m \subseteq F$
Boundedness: $\exists K_m \ge 0$ such that $\forall f \in F_m,\ E[|f| \cdot I(|f| \ge K_m)] \le \epsilon_{\mathrm{bound}}(m)$
Tightness: $P\big(\bigcup_{f\in F} \{z \mid C_m(f)(z) - f(z) \ge \epsilon_{\mathrm{tail}}(m)\}\big) \le \epsilon_{\mathrm{tail}}(m)$

The second requirement bounds the expected values of $F_m$ on values larger than $K_m$. This is required to obtain uniform convergence results in the revised model [18]. Note that $K_m$ can grow arbitrarily large. The third requirement ensures that our approximation actually converges to the original concept space F. We will show in §4.2 that this is actually a well-motivated characterization of convergence for probabilistic grammars in the supervised setting. We note that a good approximation would have $K_m$ increasing fast as a function of m and $\epsilon_{\mathrm{tail}}(m)$ and $\epsilon_{\mathrm{bound}}(m)$ decreasing fast as a function of m. As we will see in §5, we cannot have an arbitrarily fast convergence rate (by, for example, taking a subsequence of $F_m$), because the size of $K_m$ has a great effect on the number of samples required to obtain accurate estimation.

4.1 Constructing Proper Approximations for Probabilistic Grammars

We now focus on constructing proper approximations for probabilistic grammars. We make an assumption about the probabilistic grammar that ∀k, $N_k = 2$. For most common grammar formalisms, this does not change the expressive power: any grammar that can be expressed using $N_k > 2$ can be expressed using a grammar that has $N_k \le 2$.
See the supplementary material and [6].

We now construct $F_m$. For each f ∈ F we define the transformation T(f, γ) that shifts every $\theta_k = \langle\theta_{k,1}, \theta_{k,2}\rangle$ in the probabilistic grammar by γ:

$$\langle\theta_{k,1},\theta_{k,2}\rangle \mapsto \begin{cases} \langle\gamma,\ 1-\gamma\rangle & \text{if } \theta_{k,1} < \gamma \\ \langle 1-\gamma,\ \gamma\rangle & \text{if } \theta_{k,1} > 1-\gamma \\ \langle\theta_{k,1},\ \theta_{k,2}\rangle & \text{otherwise} \end{cases} \quad (3)$$

Note that T(f, γ) ∈ F for any γ ≤ 1/2. Fix a constant p > 1. For each m ∈ ℕ, define $F_m = \{T(f, m^{-p}) \mid f \in F\}$.

Proposition 4.1. There exists a constant β = β(L, q, p, N) > 0 such that $F_m$ has the boundedness property with $K_m = pN\log^3 m$ and $\epsilon_{\mathrm{bound}}(m) = m^{-\beta\log m}$.

Proof. Let $f \in F_m$. Let $Z(m) = \{z \mid |z| \le \log^2 m\}$. Then, for all z ∈ Z(m) we have $f(z) = -\sum_{i,k}\psi_{k,i}(z)\log\theta_{k,i} \le \sum_{i,k}\psi_{k,i}(z)\,(p\log m) \le pN\log^3 m = K_m$, where the first inequality follows from $f \in F_m$ ($\theta_{k,i} \ge m^{-p}$) and the second from $|z| \le \log^2 m$. In addition, from the requirements on P we have

$$E\big[|f|\cdot I(|f|\ge K_m)\big] \le pN\log m \sum_{k>\log^2 m} k\,L\,\Lambda(k)\,r^k \le \bar\gamma\,\log m\; q^{\log^2 m}$$

for some constant $\bar\gamma > 0$. Finally, for some β(L, q, p, N) = β > 0 and some constant M, if m > M then $\bar\gamma\log m\; q^{\log^2 m} \le m^{-\beta\log m}$.

We now show that $F_m$ is tight with respect to F with $\epsilon_{\mathrm{tail}}(m) = \frac{N\log^2 m}{m^p - 1}$:

Proposition 4.2. There exists an M such that for any m > M we have $P\big(\bigcup_{f\in F}\{z \mid C_m(f)(z) - f(z) \ge \epsilon_{\mathrm{tail}}(m)\}\big) \le \epsilon_{\mathrm{tail}}(m)$ for $\epsilon_{\mathrm{tail}}(m) = \frac{N\log^2 m}{m^p - 1}$ and $C_m(f) = T(f, m^{-p})$.

Proof. See the supplementary material.

We now have proper approximations for probabilistic grammars. From this point, we use $F_m$ to denote the proper approximation constructed for G. We use $\epsilon_{\mathrm{bound}}(m)$ and $\epsilon_{\mathrm{tail}}(m)$ as in Propositions 4.1 and 4.2, and assume that p > 1 is fixed, for the rest of the paper.
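In code, the operator $T(f, m^{-p})$ of Eq. (3) just clips each binomial away from the boundary of the simplex. A minimal sketch follows; the default value of p is arbitrary, subject only to the paper's condition p > 1.

```python
def smooth_binomials(theta, m, p=2.0):
    """T(f, gamma) with gamma = m**(-p): force every pair
    <theta_{k,1}, theta_{k,2}> into [gamma, 1 - gamma]."""
    gamma = m ** (-p)
    out = []
    for t1, t2 in theta:
        if t1 < gamma:
            out.append((gamma, 1.0 - gamma))
        elif t1 > 1.0 - gamma:
            out.append((1.0 - gamma, gamma))
        else:
            out.append((t1, t2))
    return out
```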
4.2 Asymptotic Empirical Risk Minimization

It would be compelling to know that the empirical risk minimizer over $F_n$ is an asymptotic empirical risk minimizer (in the log-loss case, this means it converges to the maximum likelihood estimate). As a conclusion to this section about proper approximations, we motivate the three requirements that we posed on proper approximations by showing that this is indeed true. We now unify n, the number of samples, and m, the index of the approximation of the concept space F. Let $f_n^*$ be the minimizer of the empirical risk over F ($f_n^* = \operatorname{argmin}_{f\in F} R_{\mathrm{emp},n}(f)$) and let $g_n$ be the minimizer of the empirical risk over $F_n$ ($g_n = \operatorname{argmin}_{f\in F_n} R_{\mathrm{emp},n}(f)$). Let $D = \{z_1, \dots, z_n\}$ be a sample from P(z). The operator $g_n = \operatorname{argmin}_{f\in F_n} R_{\mathrm{emp},n}(f)$ is an asymptotic empirical risk minimizer if $E[R_{\mathrm{emp},n}(g_n) - R_{\mathrm{emp},n}(f_n^*)] \to 0$. Then, we have the following:

Proposition 4.3. Let $D = \{z_1, \dots, z_n\}$ be a sample of derivations for G. Then $g_n = \operatorname{argmin}_{f\in F_n} R_{\mathrm{emp},n}(f)$ is an asymptotic empirical risk minimizer.

Lemma 4.4. Denote by $Z_{\epsilon,n}$ the set $\bigcup_{f\in F}\{z \mid C_n(f)(z) - f(z) \ge \epsilon\}$. Denote by $A_{\epsilon,n}$ the event "one of the $z_i \in D$ is in $Z_{\epsilon,n}$." Then if $F_n$ properly approximates F:

$$E[R_{\mathrm{emp},n}(g_n) - R_{\mathrm{emp},n}(f_n^*)] \le \big|E[R_{\mathrm{emp},n}(C_n(f_n^*)) \mid A_{\epsilon,n}]\big|\,P(A_{\epsilon,n}) + \big|E[R_{\mathrm{emp},n}(f_n^*) \mid A_{\epsilon,n}]\big|\,P(A_{\epsilon,n}) + \epsilon_{\mathrm{tail}}(n) \quad (4)$$

where the expectations are taken with respect to the dataset D. (See the supplementary material for a proof.)

Proof of Proposition 4.3. Let $f_0 \in F$ be the concept that puts uniform weights over θ, i.e., $\theta_k = \langle \frac12, \frac12\rangle$ for all k. Note that

$$\big|E[R_{\mathrm{emp},n}(f_n^*) \mid A_{\epsilon,n}]\big|\,P(A_{\epsilon,n}) \le \big|E[R_{\mathrm{emp},n}(f_0) \mid A_{\epsilon,n}]\big|\,P(A_{\epsilon,n}) = \frac{\log 2}{n}\sum_{l=1}^{n}\sum_{k,i} E[\psi_{k,i}(z_l) \mid A_{\epsilon,n}]\,P(A_{\epsilon,n}) \quad (5)$$

Let $A_{j,\epsilon,n}$ for j ∈ {1, ..., n} be the event "$z_j \in Z_{\epsilon,n}$". Then $A_{\epsilon,n} = \bigcup_j A_{j,\epsilon,n}$. We have:

$$E[\psi_{k,i}(z_l) \mid A_{\epsilon,n}]\,P(A_{\epsilon,n}) \le \sum_j \sum_{z_l} P(z_l, A_{j,\epsilon,n})\,|z_l| \quad (6)$$
$$\le \sum_{j\neq l}\sum_{z_l} P(z_l)\,P(A_{j,\epsilon,n})\,|z_l| + \sum_{z_l} P(z_l, A_{l,\epsilon,n})\,|z_l| \quad (7)$$
$$\le \sum_{j\neq l} P(A_{j,\epsilon,n})\,B + E[\psi_{k,i}(z) \mid z\in Z_{\epsilon,n}]\,P(z\in Z_{\epsilon,n}) \quad (8)$$
$$\le (n-1)\,B\,P(z\in Z_{\epsilon,n}) + E[\psi_{k,i}(z) \mid z\in Z_{\epsilon,n}]\,P(z\in Z_{\epsilon,n}) \quad (9)$$

where Eq. 7 comes from the $z_l$ being independent and B is the constant from §2. Therefore, we have:

$$\frac{1}{n}\sum_{l=1}^{n}\sum_{k,i} E[\psi_{k,i}(z_l) \mid A_{\epsilon,n}]\,P(A_{\epsilon,n}) \le \sum_{k,i} E[\psi_{k,i}(z) \mid z\in Z_{\epsilon,n}]\,P(z\in Z_{\epsilon,n}) + n^2\,B\,P(z\in Z_{\epsilon,n}) \quad (10)$$

From the construction of our proper approximations (Proposition 4.2), we know that only derivations of length $\log^2 n$ or greater can be in $Z_{\epsilon,n}$. Therefore:

$$E[\psi_{k,i} \mid Z_{\epsilon,n}]\,P(Z_{\epsilon,n}) \le \sum_{z:|z|>\log^2 n} P(z)\,\psi_{k,i}(z) \le \sum_{l>\log^2 n} L\,\Lambda(l)\,r^l\,l \le \bar\gamma\, q^{\log^2 n} = o(1) \quad (11)$$

where $\bar\gamma > 0$ is a constant. Similarly, we have $P(z\in Z_{\epsilon,n}) = o(n^{-2})$. This means that $|E[R_{\mathrm{emp},n}(f_n^*) \mid A_{\epsilon,n}]|\,P(A_{\epsilon,n}) \to 0$ as n → ∞. In addition, it can be shown that $|E[R_{\mathrm{emp},n}(C_n(f_n^*)) \mid A_{\epsilon,n}]|\,P(A_{\epsilon,n}) \to 0$ using the same proof technique we used above, while relying on the fact that $C_n(f_n^*) \in F_n$, and therefore $C_n(f_n^*)(z) \le pN\,|z|\log n$.

5 Sample Complexity Results

We now give our main sample complexity results for probabilistic grammars. These results hinge on the convergence of $\sup_{f\in F_n} |R_{\mathrm{emp},n}(f) - R(f)|$. The rate of this convergence can be fast if the covering numbers for $F_n$ do not grow too fast. We next give a brief overview of covering numbers. A cover gives a way to reduce a class of functions to a much smaller (finite, in fact) representative class such that each function in the original class is represented by a function in the smaller class. Let G be a class of functions. Let d(f, g) be a distance measure between two functions f, g from G. An ε-cover is a subset of G, denoted by G′, such that for every f ∈ G there exists an f′ ∈ G′ such that d(f, f′) < ε. The covering number N(ε, G, d) is the size of the smallest ε-cover of G with respect to the distance measure d. We will be interested in a specific distance measure that depends on the empirical distribution $\tilde P$ describing the data $z_1, \dots, z_n$. Let f, g ∈ G. We will use:

$$d_{\tilde P}(f,g) = E_{\tilde P}[|f-g|] = \sum_{z\in D(G)} |f(z)-g(z)|\,\tilde P(z) = \frac{1}{n}\sum_{i=1}^{n} |f(z_i)-g(z_i)| \quad (12)$$

Instead of using $\mathrm N(\epsilon, G, d_{\tilde P})$ directly, we are going to bound this quantity with $\mathrm N(\epsilon, G) = \sup_{\tilde P} \mathrm N(\epsilon, G, d_{\tilde P})$, where we consider all possible samples (yielding $\tilde P$). The following is the key result about the connection between covering numbers and the double-sided convergence of the empirical process $\sup_{f\in F_n} |R_{\mathrm{emp},n}(f) - R(f)|$ as n → ∞:

Lemma 5.1. Let $F_n$ be a permissible class² of functions such that for every $f \in F_n$ we have $E[|f|\cdot I(|f|\ge K_n)] \le \epsilon_{\mathrm{bound}}(n)$. Let $F_{\mathrm{truncated},n} = \{f\cdot I(f \le K_n) \mid f \in F_n\}$, i.e., the set of functions from $F_n$ after being truncated by $K_n$. Then for ε > 0 we have

$$P\Big(\sup_{f\in F_n} |R_{\mathrm{emp},n}(f) - R(f)| > 2\epsilon\Big) \le 8\,\mathrm N(\epsilon/8,\, F_{\mathrm{truncated},n})\,\exp\!\Big(-\frac{n\epsilon^2}{128K_n^2}\Big) + \frac{2\,\epsilon_{\mathrm{bound}}(n)}{\epsilon}$$

² The "permissible class" requirement is a mild regularity condition about measurability that holds for proper approximations. We refer the reader to [18] for more details.
Hence, Ftruncated,n has also pseudo dimension that is at most N . We have: Lemma 5.2. (From [18, 13].) Let Fn be the proper approximations for probabilistic grammars, for any 0 <  < Kn we have:  N 2eKn 2eKn N(, Ftruncated,n ) < 2 (13) log   5.1 Supervised Case Lemmas 5.1 and 5.2 can be combined to get our main sample complexity result: Theorem 5.3. Let G be a grammar. Let Fn be a proper approximation for the corresponding family of probabilistic grammars. Let P(x, z) be a distribution over derivations that satisfies the requirements in ?2. Let z1 , ..., zn be a sample of derivations. Then there exists a constant ?(L, q, p, N ) and constant M such that for any 0 < ? < 1 and 0 <  < 1 and any n > M and if     128Kn2 32 log 4/? + log 1/ n ? max 2N log(16eK /) + log , (14) n 2 ? ?(L, q, p, N ) then we have ! P sup |Remp,n (f ) ? R(f )| ? 2 ?1?? (15) f ?Fn where Kn = pN log3 n. Proof. Omitted for space. ?(L, q, p, N ) is the constant from Proposition 4.1. The proof is based on simple algebraic manipulation of the right side of Eq. 13 while relying on Lemma 5.2. 5.2 Unsupervised Case In the unsupervised setting, we have n yields of derivations from the grammar, x1 , ..., xn , and our goal again is to identify grammar parameters ? from these yields. Our concept classes are now the sets of log marginalized distributions from Fn . For each f? ? Fn , we define f?0 as: P  P P K PNk f?0 (x) = ? log z?Dx (G) exp(?f? (z)) = ? log z?Dx (G) exp ? (z)? i,k i,k k=1 i=1 (16) We denote the set of {f?0 } by Fn0 . We define analogously F0 . Note that we also need to define 0 the operator Cn0 (f 0 ) as a first step towards defining Fn0 as proper approximations P (for F ) in the 0 0 0 unsupervised setting. LetPf ? F . Let f be the concept in F such that f (x) = z f (z, x). Then we define Cn0 (f 0 )(x) = z Cn (f )(x, z). It is not immediate to show that Fn0 is a proper approximation for F0 . It is not hard to show that the boundedness property is satisfied with the same Kn and the same form of bound (n) as in Proposi0 tion 4.1 (we would have 0bound (m) = m?? log m for some ? 0 (L, q, p, N ) = ? 0 > 0). This relies on the property of bounded derivation length of P. See the supplementary material for a proof. The following result shows that we have tightness as well: 6 Proposition 5.4. There exists an M such that for any n > M we have: 2 S  N log n 0 0 0 P and the {x | C (f )(x) ? f (x) ?  (n)} ?  (n) for  (n) = 0 0 tail tail tail n f ?F p n ?1 0 operator Cn (f ) as defined above. P P Utility Lemma 5.5. For ai , bi ? 0, if ? log i ai + log i bi ?  then there exists an i such that ? log ai + log bi ? . Sketch of proof of Proposition 5.4. From Utility Lemma 5.5 we have: ? ? ? ? [ [ P? {x | Cn0 (f 0 )(x) ? f 0 (x) ? tail (n)}? ? P ? {x | ?zCn (f )(z) ? f (z) ? tail (n)}? f 0 ?F 0 f ?F (17) Define X(n) to be all x such that there exists a z with yield(z) = x and |z| ? log n. From the proof of Proposition 4.2 and the requirements on P, we know that there exists an ? ? 1 such that S  X P {x | ?z s.t. C (f )(z) ? f (z) ?  (n)} ? P(x) n tail f ?F 2 x?X(n) X ? P(x) ? x:|x|?log2 n/? ? X L?(k)rk (18) ? tail (n) k=blog2 n/?c where the last inequality happens for some n larger than a fixed M . Computing either the covering number or the pseudo dimension of Fn0 is a hard task, because the function in the classes includes the ?log-sum-exp.? In [9], Dasgupta overcomes this problem for Bayesian networks with fixed structure by giving a bound on the covering number for (his respective) F0 that depends on the covering number of F. 
Unfortunately, we cannot fully adopt this approach, because the derivations of a probabilistic grammar can be arbitrarily large. We overcome this problem using the following restriction. We assume that |Dx (G)| < d(n), where d is a function mapping n, the size of our sample, to a real number. The more samples we have, the more permissive (for large derivation set) the grammar can be. On the other hand, the more accuracy we desire, the more restricted we are in choosing grammars that have a large derivation set. We refer to this restriction as the ?derivational condition.? With the derivational condition, we can show the following result: Proposition 5.6. (Hidden Variable Rule for Probabilistic Grammars) Under the derivational condi0 tion, N(, Ftruncated,n ) ? N(/d(n), Ftruncated,n ). The proof of Proposition 5.6 is almost identical to the proof of the hidden variable rule in [9]. For the unsupervised case, then, we get the following sample complexity result: Theorem 5.7. Let G be a grammar. Let Fn0 be a proper approximation for the corresponding family of probabilistic grammars. Let P(x, z) be a distribution over derivations that satisfies the requirements in ?2. Let x1 , ..., xn be a sample of strings from P(x). Then there exists a constant ? 0 (L, q, p, N ) and constant M such that for any 0 < ? < 1 and 0 <  < 1 and any n > M and if     32 log 4/? + log 1/ 128Kn2 2N log(16eK d(n)/) + log , (19) n ? max n 2 ? ? 0 (L, q, p, N ) and |Dx (G)| < d(n), we have that ! P sup |Remp,n (f ) ? R(f )| ? 2 ?1?? (20) 0 f ?Fn where Kn = pN log3 n. For this sample complexity bound to be non-trivial, for example, we can restrict Dx (G), through d(n), to have a polynomial size in the number of our?samples. Enlarging d(n) is possible even to an exponential function of n? for ? < 1, e.g. d(n) = 2 n . 7 criterion tightness of proper approximation sample complexity bound as Kn increases . . . improves as d(n) increases . . . no effect as p increases . . . improves degrades degrades degrades Table 1: Trade-off between quantities in our learning model and effectiveness of different criteria. d(n) is the function that gives the derivational condition, i.e., |Dx (G)| ? d(n). 6 Discussion Our framework can be specialized to improve the two main criteria that have a trade-off: the tightness of the proper approximation and the sample complexity. For example, we can improve the tightness of our proper approximations by taking a subsequence of Fn . However, this will make the sample complexity bound degrade, because Kn will grow faster. Table 1 gives the different tradeoffs between parameters in our model and the effectiveness of learning. In general, we would want the derivational condition to be removed (choose d(n) = ?, or at least allow d(n) = ?(tn ) for some t, for small samples), but in that case our sample complexity bounds become trivial. In the supervised case, our result states that the number of samples we require (as an upper bound) grows mostly because of a term that behaves O(N 3 log N ) (for a fixed ? and ). If our grammar, for example, is a PCFG, then N depends on the total number of rules. When the PCFG is in Chomsky normal form and lexicalized [10, 7], then N grows by an order of V 2 , where V is the vocabulary size. This means that the bound grows by an order of O(V 6 log V ). This is consistent with conventional wisdom that lexicalized grammars require much more data for accurate learning. The dependence of the bound on N suggests that it is easier to learn models with a smaller grammar size. 
This may help explain the success of recent advances in supervised parsing [4, 22, 17] that have ?coarse? models (with a much smaller size of nontermimals) as a first pass. Those models are easier to learn and require less data to be accurate, and can serve as base models for later phases. The sample complexity bound for the unsupervised case suggests that we need log d(n) times as much data to achieve estimates as good as those for supervised learning. Interestingly, with unsupervised grammar learning, available training sentences longer than a maximum length (e.g., 10) are often ignored; see [14]. We note that sample complexity is not the only measure for the complexity of estimating probabilistic grammars. In the unsupervised setting, for example, the computational complexity of ERM is NP hard for PCFGs [5] or probabilistic automata [2]. 7 Conclusion We presented a framework for learning the parameters of a probabilistic grammar under the log-loss and derived sample complexity bounds for it. We motivated this framework by showing that the empirical risk minimizer for our approximate framework is an asymptotic empirical risk minimizer. Our framework uses a sequence of approximations to a family of probabilistic grammars, which improves as we have more data, to give distribution dependent sample complexity bounds in the supervised and unsupervised settings. Acknowledgements We thank the anonymous reviewers for their comments and Avrim Blum, Steve Hanneke, and Dan Roth for useful conversations. This research was supported by NSF grant IIS-0915187. References [1] N. Abe, J. Takeuchi, and M. Warmuth. Polynomial learnability of probabilistic concepts with respect to the Kullback-Leiber divergence. In ACM Conference on Computational Learning Theory, 1990. 8 [2] N. Abe and M. Warmuth. On the computational complexity of approximating distributions by probabilistic automata. Machine Learning, 2:205?260, 1992. [3] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999. [4] E. Charniak and M. Johnson. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proc. of ACL, 2005. [5] S. B. Cohen and N. A. Smith. Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization. In Proceedings of ACL, 2010. [6] S. B. Cohen and N. A. Smith. Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning, in preparation. [7] M. Collins. Head-driven statistical models for natural language processing. Computational Linguistics, 29:589?637, 2003. [8] M. Collins. Parameter estimation for statistical parsing models: theory and practice of distribution-free methods. Text, Speech and Language Technology (new developments in parsing technology), pages 19?55, 2004. [9] S. Dasgupta. The sample complexity of learning fixed-structure Bayesian networks. Machine Learning, 29(2-3):165?180, 1997. [10] J. Eisner. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING, 1996. [11] E. M. Gold. Language identification in the limit. Information and Control, 10(5):447?474, 1967. [12] G. Guerra and Y. Aloimonos. Discovering a language for human activity. In AAAI Workshop on Anticipation in Cognitive Systems, 2005. [13] D. Haussler. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100:78?150, 1992. [14] D. Klein and C. D. Manning. 
Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL, 2004.
[15] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
[16] L. Lin, T. Wu, J. Porway, and Z. Xu. A stochastic graph grammar for compositional object representation and recognition. Pattern Recognition, 8, 2009.
[17] S. Petrov and D. Klein. Improved inference for unlexicalized parsing. In Proc. of HLT-NAACL, 2007.
[18] D. Pollard. Convergence of Stochastic Processes. New York: Springer-Verlag, 1984.
[19] Y. Sakakibara, M. Brown, R. Hughey, S. Mian, K. Sjölander, R. C. Underwood, and D. Haussler. Stochastic context-free grammars for tRNA modeling. Nucleic Acids Research, 22, 1994.
[20] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[21] V. N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[22] D. Weiss and B. Taskar. Structured prediction cascades. In Proceedings of AISTATS, 2010.
Sparse Instrumental Variables (SPIV) for Genome-Wide Studies

Felix V. Agakov
Public Health Sciences
University of Edinburgh
[email protected]

Paul McKeigue
Public Health Sciences
University of Edinburgh
[email protected]

Jon Krohn
WTCHG, Oxford
[email protected]

Amos Storkey
School of Informatics
University of Edinburgh
[email protected]

Abstract

This paper describes a probabilistic framework for studying associations between multiple genotypes, biomarkers, and phenotypic traits in the presence of noise and unobserved confounders for large genetic studies. The framework builds on sparse linear methods developed for regression and modified here for inferring causal structures of richer networks with latent variables. The method is motivated by the use of genotypes as "instruments" to infer causal associations between phenotypic biomarkers and outcomes, without making the common restrictive assumptions of instrumental variable methods. The method may be used for an effective screening of potentially interesting genotype-phenotype and biomarker-phenotype associations in genome-wide studies, which may have important implications for validating biomarkers as possible proxy endpoints for early-stage clinical trials. Where the biomarkers are gene transcripts, the method can be used for fine mapping of quantitative trait loci (QTLs) detected in genetic linkage studies. The method is applied for examining effects of gene transcript levels in the liver on plasma HDL cholesterol levels for a sample of sequenced mice from a heterogeneous stock, with approximately 10⁵ genetic instruments and approximately 47×10³ gene transcripts.

1 Introduction

A problem common to both epidemiology and systems biology is to infer causal relationships between phenotypic measurements (biomarkers) and disease outcomes or quantitative traits. The problem is complicated by the fact that in large bio-medical studies the number of possible genetic and environmental causes is very large, which makes it implausible to conduct exhaustive interventional experiments. Moreover, it is generally impossible to remove the confounding bias due to unmeasured latent variables which influence associations between biomarkers and outcomes. Also, in situations where the biomarkers are mRNA transcript levels, the measurements are known to be quite noisy; additionally, the number of unique candidate causes may exceed the number of observations by several orders of magnitude (the p ≫ n problem). A fundamentally important practical task is to reduce the number of possible causes of a trait to a much more manageable subset of candidates for controlled interventions. Developing an efficient framework for addressing this problem may be fundamental for overcoming bottlenecks in drug development, with possible applications in the validation of biomarkers as causal risk factors, or in developing proxies for clinical trials.

Whether or not causation may be inferred from observational data has been a matter of philosophical debate. Pearl [28] argues that causal assumptions cannot be verified unless one makes a recourse
If the causal effects are shown to be identifiable, their magnitudes can be obtained by statistical estimation, which for common models often reduces to solving systems of linear equations. In contrast, from the Bayesian perspective, the causality detection problem may be viewed as that of model selection, where a model Mx?y is compared with My?x . The problem is complicated by the likelihood-equivalence, where for each setting of parameters of one model there may exist a setting of parameters of the other giving rise to the identical likelihoods. However, unless the priors are chosen in such a way that Mx?y and My?x also have identical posteriors, it may be possible to infer the direction of the arrow. The view that the priors of likelihood-equivalent models do not need to be set to ensure the equivalence of the posteriors is in contrast to e.g. [12] (and references therein), but has been defended by MacKay (see [21], Section 35). In this paper we are leaving aside debates about the nature of causality and focus instead on identifying a set of candidate causes for a large partially observed under-determined genetic problem. The approach builds on the instrumental variable methods that were historically used in epidemiological studies, and on approximate Bayesian inference in sparse linear latent variable models. Specific modeling hypotheses are tested by comparing approximate marginal likelihoods of the corresponding direct, reverse, and pleiotropic models with and without latent confounders, where we follow [21] in allowing for flexible priors. The approach is largely motivated by the observation that independent variables do not establish a causal relation, while strong unconfounded direct dependencies retained in the posterior modes even under large sparseness-inducing penalties may indicate potential causality and suggest candidates for further controlled experiments. 2 Previous work Inference of causal direction of x on y is to some extent simplified if we assume existence of an auxiliary variable g, such that g?s effect on x may only be causal, and g?s effect on y may only be through x. The idea is exploited in instrumental variable methods [3, 2, 29] which typically deal with low-dimensional linear models, where the strength of the causal effect may be estimated as wx?y = cov(g, y)/cov(g, x). Note also that the hypothesized cause-outcome models such as Mg?x?y and Mg?y?x are no longer Markov-equivalent, i.e. it may be possible to select an appropriate model via likelihood-based tests. Selecting a plausible instrument g may be difficult in some domains; however, in genetic studies it may be possible to exploit as an instrument a measure of genotypic variation. In quantitative genetics, such applications of instrumental variable methods have been termed Mendelian randomization [15, 34]. In accordance with the requirements of the classic instrumental variable methods, it is assumed that effects of the genetic instrument g on the biomarker x are unconfounded, and that effects of the instrument on the outcome y are mediated only through the biomarker (i.e. there is no pleiotropy) [17, 35]. The former assumption is grounded in the laws of Mendelian genetics and is satisfied as long as population stratification has been adequately controlled. However, the assumption of no hidden pleiotropy severely restricts the application of this approach, as most genotypic effects on complex traits are not sufficiently well understood to exclude pleiotropy as a possible explanation of an association. 
Thus the classical instrumental variable argument is limited to biomarkers for which suitable non-pleiotropic instruments exist, and cannot be easily extended to exploit studies with multiple biomarkers and genome-wide data.

A more general approach to exploiting genotypic variation to infer causal relationships between gene transcript levels and quantitative traits has been developed by Schadt et al. [30] and subsequently extended (see e.g. [5]). They relax the assumption of no pleiotropy, but instead compare models with and without pleiotropy by computing standard likelihood-based scores. After filtering to select a set of gene transcripts {x_j} that are associated with the trait y, and loci {g_i} at which genotypes have effects on transcript levels x_j, each possible triad of marker locus g_i, transcript x_j, and trait y is evaluated to compare three possible models: causal effect of transcript on trait, reverse causation, and a pleiotropic model (see Figure 1 left, (i)-(iii)). The support for these three models is compared by a measure of model fit penalized by complexity: either Akaike's Information Criterion (AIC) [30] or the Bayesian Information Criterion (BIC) [5]. Schadt et al. [30] denote this procedure the "likelihood-based causality model selection" (LCMS) approach.

[Figure 1. Left (i)-(iii): causal, reverse, and pleiotropic models of the LCMS approach [30]; (iv): pleiotropic model with two genetic instruments. Center: possible arbitrariness of LCMS inference; the histogram shows the difference of the AIC scores for the causal and reverse models for a fixed biomarker and outcome, and various choices of loci from predictive regions. Right: AIC scores of the causal (top) and reverse (bottom) models for each choice of instrument g_i (the straight lines link the scores for a fixed choice of g_i); scores were centered relative to those of the pleiotropic model. The biomarker and outcome are liver expression of Cyp27b1 and plasma HDL measurements for heterogeneous mice. Based on the choice of g_i, either causal or reverse explanations are favored.]

While the LCMS and related methods [30, 5] relax the assumption of no hidden pleiotropy of the classic Mendelian randomization method, they have three key limitations. First, effects of loci and biomarkers on outcomes are not modeled jointly, so widely varying inferences are possible depending on the choice of the triads {g_i, x_j, y}. Figure 1 (center, right) compares differences in the AIC scores for the causal and reverse models constructed for a fixed biomarker and outcome, and for various choices of the genetic instruments from the predictive region. Depending on the choice of instrument g_i, either causal or reverse explanations are favored. A second key limitation is that the LCMS method does not allow for dependencies between multiple biomarkers, measurement noise, or latent variables (such as unobserved confounders of the biomarker-outcome associations). Thus, for instance, without allowance for noise in the biomarker measurements, non-zero conditional mutual information I(g_i, y | x_j) will be interpreted as evidence of pleiotropy or reverse causation even when the relation between the underlying biomarker and outcome is causal. Third, the method is not Bayesian (the BIC score is only a crude approximation to the Bayesian procedure for model selection).
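For concreteness, the LCMS-style triad comparison can be sketched as follows (a toy Gaussian-likelihood illustration with hypothetical function names, not the implementation of [30]): fit each candidate model for a triad (g, x, y) by least squares and compare AIC values.

import numpy as np

def gauss_loglik(resid):
    # Maximized Gaussian log-likelihood of a least-squares fit.
    n = len(resid)
    s2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def ols_resid(target, covariate):
    X = np.column_stack([np.ones(len(target)), covariate])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

def aic(loglik, k):
    return 2 * k - 2 * loglik

def lcms_scores(g, x, y):
    # Each model is a pair of regressions; k counts intercept, slope,
    # and noise variance for each of the two regressions (3 + 3).
    causal = aic(gauss_loglik(ols_resid(x, g)) + gauss_loglik(ols_resid(y, x)), k=6)
    reverse = aic(gauss_loglik(ols_resid(y, g)) + gauss_loglik(ols_resid(x, y)), k=6)
    pleiotropic = aic(gauss_loglik(ols_resid(x, g)) + gauss_loglik(ols_resid(y, g)), k=6)
    return {"causal": causal, "reverse": reverse, "pleiotropic": pleiotropic}

The model with the smallest AIC is preferred; as Figure 1 illustrates, this preference can flip with the choice of instrument g, which is the instability SPIV is designed to avoid.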
One extension of the classic instrumental variable methods has been proposed by [4], who described graph-theoretic conditions which need to be satisfied in order for parameters of edges x_i → y to be identifiable by solving a system of linear equations; however, they focus on the identifiability problem rather than on addressing a large practical under-determined task with latent variables. For example, their method does not allow for an easy integration of unmeasured confounders with unknown correlations with the intermediate and outcome variables. Another approach to modeling joint effects of genetic loci and biomarkers (gene expressions) was described by [41]. They modeled the expression measurements as three ordered levels, and used a biased greedy search over model structures from multiple starting points to find models with high BIC scores. Though applicable for large-scale studies, the approach does not allow for measurement noise or latent variables (and loses information by using categorical measurements). The vast majority of other recent model selection and structure learning methods from the machine learning literature are also either not easily extended to include latent confounders (e.g. [16], [19], [22]), or applicable only to relatively low-dimensional problems with abundant data (e.g. [33] and references therein).

3 Methods

To address the problem of causal discovery in large bio-medical studies, we need a unified framework for modeling relations between genotypes, biomarkers, and outcomes that is computationally tractable enough to handle a large number of variables. Our approach extends LCMS and the instrumental variable methods by the joint modeling of effects of genetic loci and biomarkers, and by allowing for both pleiotropic genotypic effects and latent variables that generate couplings between biomarkers and confound the biomarker-outcome associations. It relies on Bayesian modeling of linear associations between the modeled variables, with sparseness-inducing priors on the linear weights.

[Figure 2. Left: SPIV structure; filled/clear nodes correspond to observed/latent variables. Right: log Bayes factor of M_{x→z→y} and M_{x→y} (log10 L_{x→y} − log10 L_{x←z→y}) as a function of the empirical correlation ρ and the sparsity parameter λ_1 for n = 100 observations, σ_z² = σ_x² = σ_y² = 1, |x| = |y| = |z| = 1 and λ_2 = 0, on the log10 scale. For intermediate λ_1's and high empirical correlations, there is a strong preference for the causal model.]

The Bayesian framework allows prior biological information to be included if available: for instance, cis-acting genotypic effects on transcript levels are likely to be stronger and less pleiotropic than trans-acting effects on transcript levels. It also offers a rigorous approach to model comparison, and is particularly attractive for addressing under-determined genetics problems (p ≫ n). The method builds on automatic relevance determination approaches (e.g. [20], [25], [37]) and adaptive shrinkage (e.g. [36], [8], [42]). Here it is used in the context of sparse multi-factor instrumental variable analysis in the presence of unobserved confounders, pleiotropy, and noise.
Model Parameterization. Our sparse instrumental variables model (SPIV) is specified with four classes of variables: genotypic and environmental covariates g ∈ R^{|g|}, phenotypic biomarkers x ∈ R^{|x|}, outcomes y ∈ R^{|y|}, and latent factors z_1, ..., z_{|z|}. The dimensionality of the latent factors |z| is fixed at a moderately high value (extraneous dimensions will tend to be pruned under the sparse prior). The latent factors z play two major roles: they represent the shared structure between groups of biomarkers, and confound biomarker-outcome associations. The biomarkers x and outcomes y are specified as hidden variables inferred from noisy observations x̃ ∈ R^{|x̃|} and ỹ ∈ R^{|ỹ|} (note that |x̃| = |x|, |ỹ| = |y|). The effects of genotype on biomarkers and outcome are assumed to be unconfounded. Pleiotropic effects of genotype (effects on outcome that are not mediated through the phenotypic biomarkers) are accounted for by an explicit parameterization of p(y|g, x, z). A graphical representation of the model is shown in Figure 2 (left). The SPIV structure clearly extends that of the instrumental variable methods [2, 3, 29] by allowing for the pleiotropic links, and also extends the pleiotropic model of Schadt et al. [30] (Figure 1 left (iii)) by allowing for multiple instruments and latent variables. All the likelihood terms of p(x, x̃, y, ỹ, z | g) are linear Gaussians with diagonal covariances:

x = U^T g + V^T z + e_x,   y = W^T x + W_z^T z + W_g^T g + e_y,   x̃ = A x + e_x̃,   ỹ = y + e_ỹ,   (1)

where e_x ~ N(0, Ψ_x), e_y ~ N(0, Ψ_y), e_x̃ ~ N(0, Ψ_x̃), e_ỹ ~ N(0, Ψ_ỹ), z ~ N(0, Ψ_z), and W ∈ R^{|x|×|y|}, W_z ∈ R^{|z|×|y|}, W_g ∈ R^{|g|×|y|}, V ∈ R^{|z|×|x|}, U ∈ R^{|g|×|x|} are regression coefficients (factor loadings); for clarity, we assume the data is centered. A ∈ R^{|x|×|x|} has a banded structure (accounting for possible couplings of the neighboring microarray measurements).

Prior Distribution. All model parameters are specified as random variables with prior distributions. For computational convenience, the variance components of the diagonal covariances Ψ_y, Ψ_ỹ, etc. are specified with inverse Gamma priors Γ^{-1}(a_i, b_i), with hyperparameters a_i and b_i fixed at values motivated by the prior beliefs about the projection noise (often available to lab technicians collecting trait or biomarker measurements). One way to view the latent confounders z is as missing genotypes or environmental covariates, so the prior variances of the latent factors are peaked at values representative of the empirical variances of the instruments g. Empirically, the choice of priors on the variance components appears to be relatively unimportant, and other choices may be considered [9].

The considered choice of a sparseness-inducing prior on the parameters W, W_z, W_g, etc. is a product of zero-mean Laplace and zero-mean normal distributions

p(w) ∝ ∏_{i=1}^{|w|} L_{w_i}(0, λ_1) N_{w_i}(0, λ_2),   (2)

where L_{w_i}(0, λ_1) ∝ exp{−λ_1 |w_i|} and N_{w_i}(0, λ_2) ∝ exp{−λ_2 w_i²}. Due to the heavy tails of the Laplacian L_{w_i}, the prior p(w) is flexible enough to capture large associations even if they are rare. Higher values of λ_1 give a stronger tendency to shrink irrelevant weights to zero. It is possible to set different λ_1 parameters for different linear weights (e.g. for the cis- and trans-acting effects); however, for clarity of presentation we shall only use a global parameter λ_1. The isotropic Gaussian component with inverse variance λ_2 contributes to the grouping effect (see [42], Theorem 1).
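As a concrete illustration, the unnormalized log of the prior (2) is simply a blend of L1 and L2 penalties on each weight vector. A minimal sketch (hypothetical function name; lam1 and lam2 correspond to λ_1 and λ_2):

import numpy as np

def log_prior(w, lam1, lam2):
    # Unnormalized log of the Laplace-times-Gaussian prior of Eq. (2):
    # sum_i [ -lam1 * |w_i| - lam2 * w_i**2 ].
    w = np.asarray(w)
    return -lam1 * np.sum(np.abs(w)) - lam2 * np.sum(w ** 2)

# Larger lam1 shrinks irrelevant weights toward exactly zero (sparsity);
# lam2 adds a ridge-like term that encourages grouping of correlated weights.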
The considered family of priors (2) induces better consistency properties [40] than the commonly used Laplacians [36, 9, 39, 26, 31]. It has also been shown [14] that important associations between variables may be recovered even for severely under-determined problems (p ≫ n) common in genetics. The SPIV model with p(w) defined as in (2) generalizes LASSO and elastic net regression [36, 42]. As a special case, it also includes sparse conditional factor analysis. Other sparse priors on the weights, such as Student-t, "spike-and-slab", or priors inducing L_{q<1} penalties, tend to result in less tractable posteriors even for linear regression [10, 37, 8], which also motivates the choice (2).

Some additional intuition about the influence of the sparse prior on the causal inference may be gained by numerically comparing the marginal likelihoods of the Markov-equivalent models with and without confounders, M_{x→z→y} and M_{x→y}. (Comparison of these models is of particular importance in epidemiology: while temporal data may often be available for distinguishing the direct and reverse models M_{x→y} and M_{y→x}, it is generally difficult to ensure that there is no confounding.) Figure 2 shows that when the empirical correlations are strong and λ_1 is at intermediate levels, there is a strong preference for the causal model. This is because the alternative model with the confounders has more parameters, and its weights need to be larger (and are therefore more strongly penalized by the prior) in order to achieve the same likelihood (note that for var(x) = var(y) = 1, likelihood-equivalence is achieved for w = v w_z, |w| ≤ 1). Larger values of λ_1 will tend to strongly penalize all the weights, which makes the models largely indistinguishable. Also, as the number of genetic instruments grows, evidence in favor of the causal or pleiotropic model becomes less dependent on the priors on model parameters. For instance, with two genotypic variables that perturb a single transcript, the causal model has three adjustable parameters, but the pleiotropic model has five (see Figure 1 left, (iv)). Where several genotypic variables perturb a single transcript and the causal model fits the data nearly as well as the pleiotropic model, the causal model will have higher marginal likelihood under almost any plausible prior, because the slightly better fit of the pleiotropic model will be outweighed by the penalty imposed by several extra adjustable parameters.

Inference. While the choice of prior (2) encourages sparse solutions, it makes exact inference of the posterior parameters p(θ|D) analytically intractable. The most efficient approach is based on the maximum-a-posteriori (MAP) treatment ([36], [9]), which reduces to solving the optimization problem

θ_MAP = argmax_θ { log p({ỹ}, {x̃} | {g}, θ) + log p(θ) },   (3)

for the joint parameters θ, where the latent variables have been integrated out. Note that the MAP solution for SPIV may also be easily derived for the semi-supervised case where the biomarker and outcome vectors are only partially observed. Compared to other approximations of inference in sparse linear models based e.g. on sampling or expectation propagation [26, 31], the MAP approximation allows for efficient handling of very large networks with multiple instruments and biomarkers, and makes it straightforward to incorporate latent confounders.
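Under the linear-Gaussian likelihood (1) and the prior (2), the MAP problem (3) for a single biomarker column reduces to a penalized least-squares criterion. A minimal sketch under simplifying assumptions (no latent factors, unit observation-noise variance; names hypothetical):

import numpy as np

def neg_map_objective(u, G, x_obs, lam1, lam2):
    # Negative penalized log-posterior for one biomarker, x ~ G u:
    # minimizing this corresponds to the MAP problem (3) under the
    # stated simplifications (an elastic-net-style criterion).
    resid = x_obs - G @ u
    return 0.5 * np.sum(resid ** 2) + lam1 * np.sum(np.abs(u)) + lam2 * np.sum(u ** 2)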
Depending on the choice of the global sparseness and grouping hyperparameters λ_1, λ_2, the obtained solutions for the weights will tend to be sparse, which is also in contrast to the full inference methods. In high dimensions in particular, the parsimony induced by the point-estimates facilitates structure discovery and interpretation of the findings. One way to optimize (3) is by an EM-like algorithm. For example, the fixed-point update for u_i ∈ R^{|g|}, linking biomarker x_i with the vector of instruments g, is easily expressed as

u_i^{(t)} = ( G^T G + σ²_{x_i} λ_1 Ũ_i^{(t−1)} + λ_2 I_{|g|} )^{−1} ( G^T ⟨x_i⟩ − G^T ⟨Z⟩ v_i ),   (4)

where G ∈ R^{n×|g|} is the design matrix, (Ũ_i)_{kl} = δ_{kl} / |u_{ki}| for all k, l ∈ {1, ..., |g|}, x_i ∈ R^n, Z ∈ R^{n×|z|}, v_i ∈ R^{|z|}, and σ²_{x_i} = (Ψ_x)_{ii}. The expectations ⟨·⟩ are computed with respect to p(·|{x̃}, {ỹ}, {g}), which for (1) is easily expressed in closed form. The rest is expressed analogously, and extensions to the partially observed cases are straightforward. Faster (although more heuristic) alternatives may be used for speeding up the M-step (e.g. [7]). The hyperparameters may be set by cross-validation, marginalized out by specifying a hyper-prior, or set heuristically based on the expected number of links to be retained in the posterior mode. Once a sparse representation is produced by pruning irrelevant dimensions, more computationally intensive inference methods for the full posterior (such as expectation propagation or MCMC) may be used in the resulting lower-dimensional model if needed.

[Figure 3. Top: SPIV for artificial datasets. Left/right plots show typical applications for high and low observation noise (σ²_{x̃} = 0.25 and σ²_{x̃} = 0.05 respectively). Top and bottom rows of each Hinton diagram correspond to the ground truth and the MAP weights U (1-18), W (19-21), W_g (22-27). Bottom: SPIV for a genome-wide study of causal effects on HDL in heterogeneous stock mice. Left/right plots show maximum a-posteriori weights θ_MAP and the mutual information I(x_i, y|e) between the unobserved biomarkers and outcome evaluated from the model at θ_MAP, under the joint Gaussian assumption. A cluster of pleiotropic links on chromosome 1 at about 173 Mbp is consistent with biology. The biomarker with the strongest unconfounded effect on HDL is Cyp27b1. Transcripts that are most predictive of HDL through their links with pleiotropic genetic markers on chromosome 1 are Uap1, Rgs5, Apoa2, and Nr1i3. Parameters λ_{1,2} were obtained by cross-validation.]

After fitting SPIV to data, formal hypothesis tests were performed by comparing the marginal likelihoods of the specific models for the retained instruments, biomarkers, and target outcomes. These were evaluated by the Laplace approximation at θ_MAP (e.g. [20]).

4 Results

Artificial data: We applied SPIV to several simulated datasets, and compared specific modeling hypotheses for the biomarkers retained in the posterior modes. The structures were consistent with the generic SPIV model, with all non-zero weights sampled from N(0, 1). Figure 3 (top) shows typical results for the high/low observation noise (∀i, σ²_{x̃_i} = σ²_{ỹ} = 0.25/0.05). Note the excellent sign-consistency of the results for the more important factors. Separate simulations showed robustness under multiple EM runs and under- or over-estimation of the true number of confounders. Subsequent testing of the specific modeling hypotheses for the most important factors resulted in the correct discrimination of causal and confounded associations in ≈ 86% of cases.
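A minimal sketch of the fixed-point update (4) for a single biomarker column, assuming the posterior means of x_i and Z have already been computed in the E-step (variable names hypothetical; the E-step itself is omitted):

import numpy as np

def update_u(u_prev, G, x_mean, Z_mean, v_i, sigma2_xi, lam1, lam2, eps=1e-8):
    # One fixed-point step of Eq. (4): a ridge-like solve with the
    # reweighted-L1 term sigma2_xi * lam1 * diag(1/|u_k|).
    p = G.shape[1]
    U_tilde = np.diag(1.0 / (np.abs(u_prev) + eps))  # eps guards division by zero
    A = G.T @ G + sigma2_xi * lam1 * U_tilde + lam2 * np.eye(p)
    b = G.T @ x_mean - G.T @ (Z_mean @ v_i)
    return np.linalg.solve(A, b)

Iterating this update (together with the analogous updates for the other weight matrices) drives small coefficients toward zero, mirroring the sparsity of the MAP solution.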
Genome-wide study of HDL cholesterol in mice: To demonstrate our method in a large-scale practical application, we examined effects of gene transcript levels in the liver on plasma high-density lipoprotein (HDL) cholesterol levels for mice from a heterogeneous stock. The genetic factors influencing HDL in mice have been well explored in biology, e.g. by Valdar et al. [38]. The gene expression data was collected and preprocessed by [13], who kindly agreed to share a part of their data. Breeding pairs for the stock were obtained at 50 generations after the stock foundation. [Figure 3 (bottom) panel labels list the retained transcripts; the MAP weights are shown at λ_1 = 40.0, λ_2 = 10.0, together with the mutual information between biomarkers and HDL at θ_MAP.] At each of the 12500 marker loci, genotypes were described by 8-D vectors of expected founder ancestry proportions inferred from the raw marker genotypes by an HMM-based reconstruction method [23]. Mouse-specific covariates included age and sex, which were used to augment the set of genetic instruments. The full set of phenotypic biomarkers consisted of 47429 transcript levels, appropriately transformed and cleaned. The available data included 260 animals.

Before applying our method, we decreased the dimensionality of the genetic features and RNA expressions by using a combination of seven feature (subset) selection methods, based on applications of filters, greedy (step-wise) regression, sequential approximations of the mutual information between the retained set and the outcome of interest, and applications of regression methods with LASSO and elastic net (EN) shrinkage priors for the genotypes g, observed biomarkers x̃, and observed HDL measurements ỹ. For the LASSO and EN methods, global hyper-parameters were obtained by 10-fold cross-validation. Note that feature selection is unavoidable for genome-wide studies using gene expressions as biomarkers. Indeed, the considered case of ≈ O(10^5) instruments and 47K biomarkers would give rise to ≳ O(10^9) interaction weights, which is expensive to analyze or even keep in memory. After applying subset selection methods, SPIV was typically applied to subsets of data with ≈ O(10^5) loci-biomarker interactions.

The results of the SPIV analysis of this dataset are shown in Figure 3 (bottom). The bottom left plot shows the maximum a-posteriori weights θ_MAP computed by running the EM-like optimization procedure to convergence from 20 random initializations. For a model with latent variables and about 30,000 weights, each run took approximately 10 minutes of execution time (only weakly optimized Matlab code, simple desktop). The parameters λ_{1,2} were obtained by 10-fold CV. Note that only a fraction of the variables remains in the posterior. In this case and for the considered sparseness-inducing priors, no hidden confounders appear to have strong effects on the outcome in the posterior (footnote 1). The spikes of the pleiotropic activations on sex chromosome 20 and around chromosome 1 are consistent with the biological knowledge [38]. The biomarker with the strongest direct effect on HDL (computed as the mean MAP weight w_i : x_i → y divided by its standard deviation over multiple runs, where each mean weight exceeds a threshold) is the expression of Cyp27b1 (a gene responsible for vitamin D metabolism).
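The shrinkage-based screening step can be sketched as follows (using scikit-learn for brevity; this illustrates the kind of elastic-net filter used, with hypothetical names, and is not the authors' exact pipeline):

import numpy as np
from sklearn.linear_model import ElasticNetCV

def screen_features(X, y, n_keep=1000):
    # X: n x p matrix of candidate features (genotype dosages or transcript
    # levels); y: n-vector of observed HDL measurements; both centered.
    # Penalties are chosen by 10-fold cross-validation; keep the features
    # with the largest absolute coefficients.
    model = ElasticNetCV(l1_ratio=0.9, cv=10).fit(X, y)
    order = np.argsort(-np.abs(model.coef_))
    return order[:n_keep]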
Knockout of the Cyp27b1 gene in mice has been shown to alter body fat stores [24], which might be expected to affect HDL cholesterol levels. Recently it has also been shown that a quantitative trait locus for circulating vitamin D levels in humans includes a gene that codes for the enzyme that synthesizes cholesterol [1]. A subsequent comparison of 18 specific reverse, pleiotropic, and causal models for Cyp27b1, HDL, and the whole vector of retained genetic instruments (known to be causal by definition) showed slightly stronger evidence in favor of the reverse hypothesis without latent confounders (with the ratio of Laplace approximations of the marginal likelihoods of the reverse vs causal models of ≈ 1.95 ± 0.27). This is in contrast to the LCMS, where the results are strongly affected by the choice of an instrument (Figure 1 right shows the results for Cyp27b1, HDL, and the same choice of instruments).

To demonstrate an application to gene fine-mapping studies, Figure 3 (bottom right) shows the approximate mutual information I(x_i, y | e = {age, sex}) between the underlying biomarkers and unobserved HDL levels expressed from the model at θ_MAP. The mutual information takes into account not only the strength of the direct effect of x_i on y, but also associations with the pleiotropic instruments, strengths of the pleiotropic effects, and dependencies between the instruments. Under the as-if Gaussian assumption, I(x_i, y_j | θ_MAP) = log(σ²_{y_j} σ²_{x_i}) − log(σ²_{y_j} σ²_{x_i} − σ⁴_{y_j x_i}), where

σ²_{y_j} = ‖Σ_gg^{1/2} (U w_j + w_{g_j})‖² + ‖Ψ_z^{1/2} (V w_j + w_{z_j})‖² + w_j^T Ψ_x w_j + ψ_{y_j},   (5)

with the rest expressed analogously. Here Σ_gg ∈ R^{|g|×|g|} is the empirical covariance of the instruments, and w_j ∈ R^{|x|}, w_{z_j} ∈ R^{|z|}, and w_{g_j} ∈ R^{|g|} are the MAP weights of the couplings of y_j with the biomarkers, confounders, and genetic instruments respectively. When the outcome is HDL, the majority of predictive transcripts are fine-mapped to a small region on chromosome 1 which includes Uap1, Rgs5, Apoa2, and Nr1i3. The informativeness of these genes about HDL cholesterol cannot be inferred simply from correlations between the measured gene expression and HDL levels; for example, when ranked in accordance with ρ²(x̃_i, ỹ | age, sex), the top 4 genes have the rankings of 838, 961, 6284, and 65 respectively. (Footnote 1: The absence of confounder effects in the posterior mode for the considered λ_{1,2} is specific to the considered mouse HDL dataset, which shows relatively strong correlations between the measured biomarkers and the outcome. An application of SPIV to proprietary human data for a study of effects of vitamins and calcium levels on colorectal cancer, which we are not yet allowed to publish, showed very strong effects of the latent confounders.) The findings are also biologically plausible and consistent with high-profile biological literature (with associations between Apoa2 and HDL described in [38], and strong links of Rgs5 to a genomic region strongly associated with metabolic traits discussed in [5], while Nr1i3 and Uap1 are their neighbors on chromosome 1 within ≈ 1 Mbp). Note that the couplings are via the links with the pleiotropic genetic markers on chromosome 1. Adjusting for sex and age prior to performing feature selection and inference did not significantly change the results. The results reported here appear to be stable for different choices of feature selection methods, data adjustments, and algorithm runs.
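Under the joint-Gaussian view, the model-implied mutual information is a function of the implied variances and covariance only. A small sketch (hypothetical names; the moments are assumed to have been assembled from θ_MAP as in Eq. (5), and the standard bivariate-Gaussian formula is used):

import numpy as np

def gaussian_mi(var_x, var_y, cov_xy):
    # Mutual information of a bivariate Gaussian: I = -1/2 log(1 - rho^2),
    # equivalently 1/2 * [log(var_x * var_y) - log(var_x * var_y - cov_xy^2)].
    det = var_x * var_y - cov_xy ** 2
    assert det > 0, "implied covariance matrix must be positive definite"
    return 0.5 * (np.log(var_x * var_y) - np.log(det))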
We note, however, that different results may potentially be obtained based on the choice of animal populations and/or the processing of the biomarker (gene expression) measurements. Details of the data collection, microarray preprocessing, and feature selection, along with detailed findings for other biomarkers and phenotypic outcomes, will be made available online. Definitive confirmation of these relationships would require gene knock-out experiments.

5 Discussion and extensions

In large-scale genetic and bio-medical studies, we face the practical task of reducing a huge set of candidate causes of complex traits to a more manageable subset of candidates on which experimental control (such as gene knockout experiments or biomarker alterations) may be performed. SPIV performs the screening of interesting biomarker-phenotype and genotype-biomarker-phenotype associations by exploiting maximum-a-posteriori inference in a sparse linear latent variable model. Additional screening is performed by comparing approximate marginal likelihoods of specific modeling hypotheses, including direct, reverse, and pleiotropic models with and without confounders, which (under the assumption of no "prior equivalence") may serve as an additional test of possible causation [21]. Intuitively, the approach is motivated by the observation that while independence of variables implies that they are not in a causal relation, a preference for an unconfounded causal model may indicate possible causality and warrant further controlled experiments. Technically, SPIV may be viewed as an extension of LASSO and elastic net regression which allows for latent variables and pleiotropic dependencies.

While being particularly attractive for genetic studies, SPIV or its modifications may potentially be applied to more general structure learning tasks. For example, when applied iteratively, SPIV may be used to guide search over richer model structures (where a greedy search over parent nodes is replaced by a continuous optimization problem which combines subset selection and regression in the presence of latent variables), which may be used for structure learning problems. Other extensions of the framework could involve hybrid (discrete- and real-valued) outcomes with nonlinear/non-Gaussian likelihoods. Also, as mentioned earlier, once sparse representations are produced by the MAP inference, it may be possible to utilize more accurate approximations of the inference applicable to the induced sparse structures [6]. Note also that sparse priors on the linear weights tend to give rise to sparse covariance matrices. A potentially interesting alternative may involve a direct estimation of conditional precision matrices with a sparse group penalty. While SPIV attempts to focus attention on important biomarkers establishing strong direct associations with the phenotypes, modeling of the precisions may be used for filtering out unimportant factors (conditionally) independent of the outcome variables. Our future work will involve a direct estimation of the sparse conditional precision matrix Σ^{-1}_{xyz|g} of the biomarkers, outcomes, and unmeasured confounders (given the instruments), through latent variable extensions of the recently proposed graphical LASSO and related methods [11, 18].
The key purpose of this paper is to draw the attention of the machine learning community to the problem of inferring causal relationships between phenotypic measurements and complex traits (disease risks), which may have tremendous implications in epidemiology and systems biology. Our specific approach to the problem is inspired by the ideas of instrumental variable analysis commonly used in epidemiological studies, which we have extended to properly address situations where the genetic variables may be direct causes of the hypothesized outcomes. The sparse instrumental variable framework (SPIV) overcomes limitations of the likelihood-based LCMS methods often used by geneticists by modeling joint effects of genetic loci and biomarkers in the presence of noise and latent variables. The approach is tractable enough to be used in genetic studies with tens of thousands of variables. It may be used for identifying specific genes associated with phenotypic outcomes, and may have wide applications in the identification of biomarkers as possible targets for interventions, or as proxy endpoints for early-stage clinical trials.

References

[1] J. Ahn, K. Yu, and R. Stolzenberg-Solomon et al. Genome-wide association study of circulating vitamin D levels. Human Molecular Genetics, 2010. Epub ahead of print.
[2] J. D. Angrist, G. W. Imbens, and D. B. Rubin. Identification of causal effects using instrumental variables (with discussion). J. of the Am. Stat. Assoc., 91:444-455, 1996.
[3] R. J. Bowden and D. A. Turkington. Instrumental Variables. Cambridge Uni Press, 1984.
[4] C. Brito and J. Pearl. Generalized instrumental variables. In UAI, 2002.
[5] Y. Chen, J. Zhu, and P. Y. Lum et al. Variations in DNA elucidate molecular networks that cause disease. Nature, 452:429-435, 2008.
[6] B. Cseke and T. Heskes. Improving posterior marginal approximations in latent Gaussian models. In AISTATS, 2010.
[7] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Ann. of Stat., 32, 2004.
[8] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. of the Am. Stat. Assoc., 96(456):1348-1360, 2001.
[9] M. Figueiredo. Adaptive sparseness for supervised learning. IEEE Trans. on PAMI, 25(9), 2003.
[10] I. E. Frank and J. H. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35(2):109-135, 1993.
[11] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 2008.
[12] D. Heckerman, C. Meek, and G. F. Cooper. A Bayesian approach to causal discovery. In C. Glymour and G. F. Cooper, editors, Computation, Causation, and Discovery. MIT, 1999.
[13] G. J. Huang, S. Shifman, and W. Valdar et al. High resolution mapping of expression QTLs in heterogeneous stock mice in multiple tissues. Genome Research, 19(6):1133-40, 2009.
[14] J. Jia and B. Yu. On model selection consistency of the elastic net when p >> n. Technical Report 756, UC Berkeley, Department of Statistics, 2008.
[15] M. B. Katan. Apolipoprotein E isoforms, serum cholesterol and cancer. Lancet, i:507-508, 1986.
[16] S. Kim and E. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLOS Genetics, 5(8), 2009.
[17] D. A. Lawlor, R. M. Harbord, and J. Sterne et al. Mendelian randomization: using genes as instruments for making causal inferences in epidemiology. Stat. in Medicine, 27:1133-1163, 2008.
[18] E. Levina, A. Rothman, and J. Zhu.
Sparse estimation of large covariance matrices via a nested lasso penalty. The Ann. of App. Stat., 2(1):245-263, 2008.
[19] M. H. Maathuis, M. Kalisch, and P. Buhlmann. Estimating high-dimensional intervention effects from observational data. The Ann. of Stat., 37:3133-3164, 2009.
[20] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415-447, 1992.
[21] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge Uni Press, 2003.
[22] J. Mooij, D. Janzing, J. Peters, and B. Schoelkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In ICML, 2009.
[23] R. Mott, C. J. Talbot, M. G. Turri, A. C. Collins, and J. Flint. A method for fine mapping quantitative trait loci in outbred animal stocks. Proc. Nat. Acad. Sci. USA, 97:12649-12654, 2000.
[24] C. J. Narvaez and D. Matthews et al. Lean phenotype and resistance to diet-induced obesity in vitamin D receptor knockout mice correlates with induction of uncoupling protein-1. Endocrinology, 150(2), 2009.
[25] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[26] T. Park and G. Casella. The Bayesian LASSO. J. of the Am. Stat. Assoc., 103(482), 2008.
[27] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge Uni Press, 2000.
[28] J. Pearl. Causal inference in statistics: an overview. Statistics Surveys, 3:96-146, 2009.
[29] J. M. Robins and S. Greenland. Identification of causal effects using instrumental variables: comment. J. of the Am. Stat. Assoc., 91:456-458, 1996.
[30] E. E. Schadt, J. Lamb, X. Yang, and J. Zhu et al. An integrative genomics approach to infer causal associations between gene expression and disease. Nature Genetics, 37(7):710-717, 2005.
[31] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. JMLR, 9, 2008.
[32] I. Shpitser and J. Pearl. Identification of conditional interventional distributions. In UAI, 2006.
[33] R. Silva, R. Scheines, C. Glymour, and P. Spirtes. Learning the structure of linear latent variable models. JMLR, 7, 2006.
[34] G. D. Smith and S. Ebrahim. Mendelian randomisation: can genetic epidemiology contribute to understanding environmental determinants of disease? Int. J. of Epidemiology, 32:1-22, 2003.
[35] D. C. Thomas and D. V. Conti. Commentary: The concept of Mendelian randomization. Int. J. of Epidemiology, 32, 2004.
[36] R. Tibshirani. Regression shrinkage and selection via the lasso. JRSS B, 58(1):267-288, 1996.
[37] M. E. Tipping. Sparse Bayesian learning and the RVM. JMLR, 1:211-244, 2001.
[38] W. Valdar, L. C. Solberg, and S. Burnett et al. Genome-wide genetic association of complex traits in heterogeneous stock mice. Nature Genetics, 38:879-887, 2006.
[39] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using L1-constrained quadratic programming. IEEE Trans. on Inf. Theory, 55:2183-2202, 2007.
[40] M. Yuan and Y. Lin. On the nonnegative garrote estimator. JRSS:B, 69, 2007.
[41] J. Zhu, M. C. Wiener, and C. Zhang et al. Increasing the power to detect causal associations by combining genotypic and expression data in segregating populations. PLOS Comp. Biol., 3(4):692-703, 2007.
[42] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. JRSS:B, 67(2), 2005.
Learning from Logged Implicit Exploration Data

Alexander L. Strehl (*), Facebook Inc., 1601 S California Ave, Palo Alto, CA 94304, [email protected]
John Langford, Yahoo! Research, 111 West 40th Street, 9th Floor, New York, NY, USA 10018, [email protected]
Sham M. Kakade, Department of Statistics, University of Pennsylvania, Philadelphia, PA 19104, [email protected]
Lihong Li, Yahoo! Research, 4401 Great America Parkway, Santa Clara, CA, USA 95054, [email protected]

(*) Part of this work was done while A. Strehl was at Yahoo! Research.

Abstract

We provide a sound and consistent foundation for the use of nonrandom exploration data in "contextual bandit" or "partially labeled" settings where only the value of a chosen action is learned. The primary challenge in a variety of settings is that the exploration policy, in which "offline" data is logged, is not explicitly known. Prior solutions here require either control of the actions during the learning process, recorded random exploration, or actions chosen obliviously in a repeated manner. The techniques reported here lift these restrictions, allowing the learning of a policy for choosing actions given features from historical data where no randomization occurred or was logged. We empirically verify our solution on two reasonably sized sets of real-world data obtained from Yahoo!.

1 Introduction

Consider the advertisement display problem, where a search engine company chooses an ad to display which is intended to interest the user. Revenue is typically provided to the search engine by the advertiser only when the user clicks on the displayed ad. This problem is of intrinsic economic interest, accounting for a substantial fraction of income for several well-known companies such as Google, Yahoo!, and Facebook. Before discussing the proposed approach, we formalize the problem and then explain why more conventional approaches can fail.

The warm-start problem for contextual exploration: Let X be an arbitrary input space, and A = {1, ..., k} a set of actions. An instance of the contextual bandit problem is specified by a distribution D over tuples (x, r⃗) where x ∈ X is an input and r⃗ ∈ [0, 1]^k is a vector of rewards [6]. Events occur on a round-by-round basis where on each round t:

1. The world draws (x, r⃗) ~ D and announces x.
2. The algorithm chooses an action a ∈ A, possibly as a function of x and historical information.
3. The world announces the reward r_a of action a, but not r_{a'} for a' ≠ a.

It is critical to understand that this is not a standard supervised-learning problem, because the reward of the other actions a' ≠ a is not revealed. The standard goal in this setting is to maximize the sum of rewards r_a over the rounds of interaction. In order to do this well, it is essential to use previously recorded events to form a good policy on the first round of interaction. Thus, this is a "warm start" problem. Formally, given a dataset of the form S = (x, a, r_a)^* generated by the interaction of an uncontrolled logging policy, we want to construct a policy h maximizing (either exactly or approximately) V^h := E_{(x,r⃗)~D}[r_{h(x)}].

Approaches that fail: There are several approaches that may appear to solve this problem, but turn out to be inadequate:

1. Supervised learning. We could learn a regressor s : X × A → [0, 1] which is trained to predict the reward on observed events, conditioned on the action a and other information x. From this regressor, a policy is derived according to h(x) = argmax_{a∈A} s(x, a).
A flaw of this approach is that the argmax may range over a set of choices not included in the training data, and hence may not generalize at all (or only poorly). This can be verified by considering some extreme cases. Suppose that there are two actions a and b, with action a occurring 10^6 times and action b occurring 10^2 times. Since action b occurs only a 10^{-4} fraction of the time, a learning algorithm forced to trade off between predicting the expected value of r_a and r_b overwhelmingly prefers to estimate r_a well at the expense of accurate estimation for r_b. And yet, in application, action b may be chosen by the argmax. This problem is only worse when action b occurs zero times, as might commonly occur in exploration situations.

2. Bandit approaches. In the standard setting these approaches suffer from the curse of dimensionality, because they must be applied conditioned on X. In particular, applying them requires data linear in X × A, which is extraordinarily wasteful. In essence, this is a failure to take advantage of generalization.

3. Contextual bandits. Existing approaches to contextual bandits such as EXP4 [1] or Epoch-Greedy [6] require either interaction to gather data or knowledge of the probability that the logging policy chose the action a. In our case the probability is unknown, and it may in fact always be 1.

4. Exploration scavenging. It is possible to recover exploration information from action visitation frequency when a logging policy chooses actions independent of the input x (but possibly dependent on history) [5]. This doesn't fit our setting, where the logging policy is surely dependent on the input.

5. Propensity scores, naively. When conducting a survey, a question about income might be included, and then the proportion of responders at various income levels can be compared to census data to estimate a probability, conditioned on income, that someone chooses to partake in the survey. Given this estimated probability, results can be importance-weighted to estimate average survey outcomes for the entire population [2]. This approach is problematic here, because the policy making decisions when logging the data may be deterministic rather than probabilistic. In other words, accurately predicting the probability of the logging policy choosing an ad implies always predicting 0 or 1, which is not useful for our purposes. Although the straightforward use of propensity scores does not work, the approach we take can be thought of as a more clever use of a propensity score, as discussed below. Lambert and Pregibon [4] provide a good explanation of propensity scoring in an Internet advertising setting.

Our approach: The approach proposed in the paper naturally breaks down into three steps (a code sketch of the first two steps follows this list).

1. For each event (x, a, r_a), estimate the probability π̂(a|x) that the logging policy chooses action a, using regression. Here, the "probability" is over time: we imagine taking a uniform random draw from the collection of (possibly deterministic) policies used at different points in time.

2. For each event (x, a, r_a), create a synthetic controlled contextual bandit event according to (x, a, r_a, 1/max{π̂(a|x), τ}), where τ > 0 is some parameter. The quantity 1/max{π̂(a|x), τ} is an importance weight that specifies how important the current event is for training. As will become clear, the parameter τ is critical for numerical stability.

3. Apply an offline contextual bandit algorithm to the set of synthetic contextual bandit events.
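A minimal sketch of steps 1-2 (the propensity regressor here is a simple empirical-frequency estimator over discretized contexts; names are illustrative, and any calibrated conditional-probability model could play the same role):

from collections import Counter, defaultdict

def estimate_propensity(logged, bucket):
    # Step 1: estimate pi_hat(a|x) as the empirical frequency with which
    # the logging policy showed action a in the context bucket of x.
    counts = defaultdict(Counter)
    for x, a, _ in logged:
        counts[bucket(x)][a] += 1
    def pi_hat(a, x):
        c = counts[bucket(x)]
        total = sum(c.values())
        return c[a] / total if total else 0.0
    return pi_hat

def synthesize_events(logged, pi_hat, tau=0.01):
    # Step 2: attach the clipped importance weight 1/max(pi_hat, tau).
    return [(x, a, r, 1.0 / max(pi_hat(a, x), tau)) for x, a, r in logged]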
In our second set of experimental results (Section 4.2) a variant of the argmax regressor is used with two critical modifications: (a) we limit the scope of the argmax to those actions with positive probability; (b) we importance-weight events so that the training process emphasizes good estimation for each action equally. It should be emphasized that the theoretical analysis in this paper applies to any algorithm for learning on contextual bandit events; we chose this one because it is a simple modification of existing (but fundamentally broken) approaches.

The above approach is most similar to the propensity-score approach mentioned above. Relative to it, we use a different definition of probability which is not necessarily 0 or 1 when the logging policy is completely deterministic. Three critical questions arise when considering this approach.

1. What does π̂(a|x) mean, given that the logging policy may be deterministically choosing an action (ad) a given features x? The essential observation is that a policy which deterministically chooses action a on day 1 and then deterministically chooses action b on day 2 can be treated as randomizing between actions a and b with probability 0.5, when the number of events is the same each day and the events are IID. Thus π̂(a|x) is an estimate of the expected frequency with which action a would be displayed given features x over the timespan of the logged events. In Section 3 we show that this approach is sound in the sense that in expectation it provides an unbiased estimate of the value of the new policy.

2. How do the inevitable errors in π̂(a|x) influence the process? It turns out they have an effect which depends on τ. For very small values of τ, the estimates of π̂(a|x) must be extremely accurate to yield good performance, while for larger values of τ less accuracy is required. In Section 3.1 we prove this robustness property.

3. What influence does the parameter τ have on the final result? While creating a bias in the estimation process, it turns out that the form of this bias is mild and relatively reasonable: actions which are displayed with low frequency conditioned on x effectively have an underestimated value. This is exactly as expected in the limit where actions have no frequency. In Section 3.1 we prove this.

We close with a generalization from policy evaluation to policy selection with a sample complexity bound in Section 3.2, and then experimental results in Section 4 using real data.

2 Formal Problem Setup and Assumptions

Let π_1, ..., π_T be T policies, where, for each t, π_t is a function mapping an input from X to a (possibly deterministic) distribution over A. The learning algorithm is given a dataset of T samples, each of the form (x, a, r_a) ∈ X × A × [0, 1], where (x, r⃗) is drawn from D as described in Section 1, and the action a ~ π_t(x) is chosen according to the t-th policy. We denote this random process by (x, a, r_a) ~ (D, π_t(·|x)). Similarly, interaction with the T policies results in a sequence S of T samples, which we denote S ~ (D, π_t(·|x))_{t=1}^T. The learner is not given prior knowledge of the π_t.

Offline policy estimator: Given a dataset of the form

S = {(x_t, a_t, r_{t,a_t})}_{t=1}^T,   (1)

where for all t, x_t ∈ X, a_t ∈ A, r_{t,a_t} ∈ [0, 1], we form a predictor π̂ : X × A → [0, 1] and then use it with a threshold τ ∈ [0, 1] to form an offline estimator for the value of a policy h. Formally, given a new policy h : X → A and a dataset S, define the estimator

V̂_π̂^h(S) = (1/|S|) Σ_{(x,a,r)∈S} r_a I(h(x) = a) / max{π̂(a|x), τ},   (2)

where I(·) denotes the indicator function. The shorthand V̂_π̂^h will be used when there is no ambiguity. The purpose of τ is to upper-bound the individual terms in the sum, similar to previous methods like robust importance sampling [10].
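The estimator (2) is one line of arithmetic over the logged triples; a minimal sketch (hypothetical names, reusing the propensity estimate from the previous snippet):

def offline_value(events, h, pi_hat, tau=0.01):
    # Offline estimate of policy h from logged triples (x, a, r), per Eq. (2):
    # the average of r * 1{h(x) = a} / max(pi_hat(a|x), tau).
    total = 0.0
    for x, a, r in events:
        if h(x) == a:
            total += r / max(pi_hat(a, x), tau)
    return total / len(events)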
3 Theoretical Results

We now present our algorithm and main theoretical results. The main idea is twofold: first, we have a policy estimation step, where we estimate the (unknown) logging policy (Subsection 3.1); second, we have a policy optimization step, where we utilize our estimated logging policy (Subsection 3.2). Our main result, Theorem 3.2, provides a generalization bound addressing the issue of how both the estimation and optimization errors contribute to the total error.

The logging policy π_t may be deterministic, implying that conventional approaches relying on randomization in the logging policy are not applicable. We show next that this is ok when the world is IID and the policy varies over its actions. We effectively substitute randomization in the world for the standard approach of randomization in the algorithm. A basic claim is that the estimator is equivalent, in expectation, to one under a stochastic policy defined by

π(a|x) = E_{t~UNIF(1,...,T)} [π_t(a|x)],   (3)

where UNIF(···) denotes the uniform distribution. The stochastic policy π chooses an action uniformly at random over the T policies π_t. Our first result is that the expected value of our estimator is the same whether the world chooses actions according to π or to the sequence of policies π_t. Although this result and its proof are straightforward, it forms the basis for the rest of the results in our paper. Note that the policies π_t may be arbitrary, but we have assumed that they do not depend on the data used for evaluation. This assumption is only necessary for the proofs and can often be relaxed in practice, as we show in Section 4.1.

Theorem 3.1. For any contextual bandit problem D with identical draws over T rounds, for any sequence of possibly stochastic policies π_t(a|x) with π derived as above, and for any predictor π̂,

E_{S~(D,π_t(·|x))_{t=1}^T} [ V̂_π̂^h(S) ] = E_{(x,r⃗)~D, a~π(·|x)} [ r_a I(h(x) = a) / max{π̂(a|x), τ} ].   (4)

This theorem relates the expected value of our estimator when T policies are used to the much simpler and more standard setting where a single fixed stochastic policy is used.

3.1 Policy Estimation

In this section we show that for a suitable choice of τ and π̂ our estimator is sufficiently accurate for evaluating new policies h. We aggressively use the simplification of the previous section, which shows that we can think of the data as generated by a fixed stochastic policy π, i.e. π_t = π for all t. For a given estimate π̂ of π, define the "regret" to be a function reg : X → [0, 1] given by

reg(x) = max_{a∈A} (π(a|x) − π̂(a|x))².   (5)

We do not use ℓ_1 or ℓ_∞ loss above because they are harder to minimize than ℓ_2 loss. Our next result is that the new estimator is consistent. In the following lemma statement, I(·) denotes the indicator function, π(a|x) the probability that the logging policy chooses action a on input x, and V̂_π̂^h our estimator as defined by Equation 2 based on parameter τ.

Lemma 3.1. Let π̂ be any function from X to distributions over actions A. Let h : X → A be any deterministic policy. Let V^h(x) = E_{r⃗~D(·|x)}[r_{h(x)}] denote the expected value of executing policy h on input x.
We have that

  E_x[ I(π(h(x)|x) ≥ τ) · ( V^h(x) − √(reg(x))/τ ) ]  ≤  E[V̂^h_π̂]  ≤  V^h + E_x[ I(π(h(x)|x) ≥ τ) · √(reg(x))/τ ].

In the above, the expectation E[V̂^h_π̂] is taken over all sequences of T tuples (x, a, r), where (x, r) ∼ D and a ∼ π(·|x).¹

This lemma bounds the bias in our estimate of V^h(x). There are two sources of bias: one from the error of π̂(a|x) in estimating π(a|x), and the other from the threshold τ. For the first source, it is crucial that we analyze the result in terms of the squared loss rather than (say) ℓ∞ loss, as reasonable sample complexity bounds on the regret of squared-loss estimates are achievable.²

¹ Note that varying T does not change the expectation of our estimator, so T has no effect in the lemma.
² Extending our results to log loss would be interesting future work, but is made difficult by the fact that log loss is unbounded.

Lemma 3.1 shows that the expected value of our estimate V̂^h of a policy h approximates a lower bound on the true value of the policy h, where the approximation error is due to errors in the estimate π̂ and the lower bound is due to the threshold τ. When π̂ = π, the statement of Lemma 3.1 simplifies to

  E_x[ I(π(h(x)|x) ≥ τ) · V^h(x) ]  ≤  E[V̂^h_π̂]  ≤  V^h.

Thus, with a perfect predictor of π, the expected value of the estimator V̂^h_π̂ is a guaranteed lower bound on the true value of policy h. However, as the left-hand side of this statement suggests, it may be a very loose bound, especially if the action chosen by h often has a small probability of being chosen by π.

The dependence on 1/τ in Lemma 3.1 is somewhat unsettling, but unavoidable. Consider an instance of the bandit problem with a single input x and two actions a1, a2. Suppose that π(a1|x) = τ + ε for some positive ε, and that h(x) = a1 is the policy we are evaluating. Suppose further that the rewards are always 1 and that π̂(a1|x) = τ. Then the estimator satisfies E[V̂^h_π̂] = π(a1|x)/π̂(a1|x) = (τ + ε)/τ. Thus, the expected error in the estimate is E[V̂^h_π̂] − V^h = |(τ + ε)/τ − 1| = ε/τ, while the regret of π̂ is (π(a1|x) − π̂(a1|x))² = ε².
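The 1/τ dependence in this worst-case example is easy to reproduce numerically. The toy simulation below (our own construction, using only the quantities defined above) logs n IID events from the stochastic policy with π(a1|x) = τ + ε and evaluates h with π̂(a1|x) = τ.

    # Toy reproduction of the worst-case example: rewards are always 1,
    # h always plays a1, pi_hat(a1|x) = tau while pi(a1|x) = tau + eps.
    import random

    tau, eps, n = 0.05, 0.01, 200000
    hits = sum(random.random() < tau + eps for _ in range(n))  # events where a1 was logged
    v_hat = (hits / tau) / n        # each matching event contributes 1 / tau
    print(v_hat - 1.0)              # close to eps/tau = 0.2, although regret is only eps^2 = 1e-4

The printed error is roughly ε/τ, illustrating that a tiny squared-loss regret can still translate into a large estimation error when τ is small.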
3.2 Policy Optimization

The previous section proves that we can effectively evaluate a policy h by observing a stochastic policy π, as long as the actions chosen by h have adequate support under π, specifically π(h(x)|x) ≥ τ for all inputs x. However, we are often interested in choosing the best policy h from a set of policies H after observing logged data. Furthermore, as described in Section 2, the logged data are generated from T fixed, possibly deterministic, policies π1, ..., πT rather than a single stochastic policy. As in Section 3 we define the stochastic policy π by Equation 3,

  π(a|x) = E_{t∼UNIF(1,...,T)}[πt(a|x)].

The results of Section 3.1 apply to the policy optimization problem. However, note that the data are now assumed to be drawn from the execution of a sequence of T policies π1, ..., πT, rather than by T draws from π. Next, we show that it is possible to compete well with the best hypothesis in H that has adequate support under π (even though the data are not generated from π).

Theorem 3.2. Let π̂ be any function from X to distributions over actions A. Let H be any set of deterministic policies. Define H̃ = {h ∈ H | π(h(x)|x) > τ, ∀x ∈ X} and h̃ = argmax_{h∈H̃}{V^h}. Let ĥ = argmax_{h∈H}{V̂^h_π̂} be the hypothesis that maximizes the empirical value estimator defined in Equation 2. Then, with probability at least 1 − δ,

  V^h̃ − V^ĥ ≤ (2/τ) · ( √(E_x[reg(x)]) + √( ln(2|H|/δ) / (2T) ) ),   (6)

where reg(x) is defined, with respect to π, in Equation 5. The proof of Theorem 3.2 relies on the lower-bound property of our estimator (the left-hand side of the inequality stated in Lemma 3.1). In other words, if H contains a very good policy that has little support under π, we will not be able to detect that with our estimator. On the other hand, our estimation is safe in the sense that we will never drastically overestimate the value of any policy in H. This "underestimate, but don't overestimate" property is critical to the application of optimization techniques, as it implies we can use an unrestrained learning algorithm to derive a warm-start policy.
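For a finite policy class, the optimization step of Theorem 3.2 is just an argmax of the estimator over H. A minimal sketch, reusing offline_value_estimate from above (the class H as an explicit list is our simplifying assumption; in practice the argmax is usually realized implicitly by a learning algorithm):

    # Empirical policy selection: h_hat = argmax over H of V_hat (Theorem 3.2).
    def select_policy(samples, H, pi_hat, tau):
        """H: a finite list of candidate policies (callables x -> action)."""
        return max(H, key=lambda h: offline_value_estimate(samples, h, pi_hat, tau))

By the "underestimate, but don't overestimate" property, a policy with little support under π cannot win this argmax by a spuriously inflated estimate.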
4 Empirical Evaluation

We evaluated our method on two real-world datasets obtained from Yahoo!. The first dataset consists of uniformly random exploration data, from which an unbiased estimate of the value of any policy can be obtained. This dataset is thus used to verify the accuracy of our offline evaluator (2). The second dataset then demonstrates how policy optimization can be done from nonrandom offline data.

4.1 Experiment I

The first experiment involves news article recommendation in the "Today Module" on the Yahoo! front page. For every user visit, this module displays a high-quality news article out of a small candidate pool, which is hand-picked by human editors. The pool contains about 20 articles at any given time. We seek to maximize the click probability (aka click-through rate, or CTR) of the highlighted article. This problem is modeled as a contextual bandit problem, where the context consists of both user and article information, the arms correspond to articles, and the reward of a displayed article is 1 if there is a click and 0 otherwise. Therefore, the value of a policy is exactly its overall CTR. To protect business-sensitive information, we report only normalized CTR (nCTR), defined as the ratio of the true CTR to the CTR of a random policy.

Our dataset, denoted D0, was collected from real traffic on the Yahoo! front page during a two-week period in June 2009. It contains T = 64.7M events in the form of triples (x, a, r), where the context x contains user/article features, the arm a was chosen uniformly at random from a dynamic candidate pool A, and r is a binary reward signal indicating whether the user clicked on a. Since actions are chosen randomly, we have π̂(a|x) = π(a|x) = 1/|A| and reg(x) = 0. Consequently, Lemma 3.1 implies E[V̂^h_π̂] = V^h provided τ < 1/|A|. Furthermore, a straightforward application of Hoeffding's inequality guarantees that V̂^h_π̂ concentrates to V^h at the rate of O(1/√T) for any policy h, which has also been verified empirically [9]. Given the size of our dataset, therefore, we used this dataset to calculate V̂0 = V̂^h_π̂ using π̂(a|x) = 1/|A| in (2). The result V̂0 was then treated as "ground truth", against which we can evaluate how accurate the offline evaluator (2) is when non-random log data are used instead.

To obtain non-random log data, we ran the LinUCB algorithm using the offline bandit simulation procedure, both from [8], on our random log data D0, and recorded the events (x, a, r) for which LinUCB chose arm a for context x. Note that LinUCB is a deterministic learning algorithm, and may choose different arms for the same context at different timesteps. We call this subset of recorded events Dπ. It is known that the set of recorded events has the same distribution as if we had run LinUCB on real user visits to the Yahoo! front page. We used Dπ as our non-random log data for evaluation.

To define the policy h for evaluation, we used D0 to estimate each article's overall CTR across all users, and then h was defined as selecting the article with the highest estimated CTR. We then evaluated h on Dπ using the offline evaluator (2). Since the set A of articles changes over time (with news articles being added and old articles retired), π(a|x) is very small due to the large number of articles over the two-week period, resulting in large variance. To resolve this problem, we split the dataset Dπ into subsets so that within each subset the candidate pool remains constant³, and then estimate π(a|x) for each subset separately using ridge regression on the features x. We note that more advanced conditional probability estimation techniques could be used.

³ We could do so because we know A for every event in D0.

Figure 1 plots V̂^h_π̂ with varying τ against the ground truth V̂0. As expected, as τ becomes larger, our estimate becomes more (downward) biased. For a large range of τ values, our estimates are reasonably accurate, suggesting the usefulness of the proposed method. In contrast, a naive approach, which assumes π(a|x) = 1/|A|, gives a very poor estimate of 2.4. For extremely small values of τ, however, there appears to be a consistent trend of over-estimating the policy value. This is due to the fact that negative moments of a positive random variable are often larger than the corresponding moments of its expectation [7].

Note that the logging policy we used violates one of the assumptions used to prove Lemma 3.1, namely that the exploration policy at timestep t not depend on earlier events. Our offline evaluator is accurate in this setting, which suggests that the assumption may be relaxable in practice.

[Figure 1: Accuracy of the offline evaluator with varying τ values (offline estimate vs. ground truth nCTR; plot not reproducible from the extraction).]
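The paper does not list the estimation code; a plausible minimal version of the per-subset ridge-regression step is sketched below. The use of scikit-learn, the one-model-per-action layout, and the clipping into [0, 1] are our assumptions.

    # Hedged sketch of the per-subset logging-policy estimation: within a
    # subset whose candidate pool is fixed, regress the indicator
    # "action a was logged" on the context features x.
    import numpy as np
    from sklearn.linear_model import Ridge

    def fit_pi_hat_for_subset(X, actions, pool):
        """X: (num_events, num_features) array; actions: logged action per event;
        pool: the fixed candidate pool for this subset. Returns pi_hat(a, x)."""
        models = {}
        for a in pool:
            y = (np.asarray(actions) == a).astype(float)
            models[a] = Ridge(alpha=1.0).fit(X, y)
        def pi_hat(a, x):
            p = float(models[a].predict(np.asarray(x).reshape(1, -1))[0])
            return min(max(p, 0.0), 1.0)  # clip the regression output into [0, 1]
        return pi_hat

One model per action keeps the estimation problem a plain regression; the clipping is needed only because ridge regression does not constrain its output to be a probability.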
4.2 Experiment II

In the second experiment, we investigate our approach to the warm-start problem. The dataset was provided by Yahoo!, covering a period of one month in 2008. The data comprise logs of events (x, a, y), where each event represents a visit by a user to a particular web page x, from a set of web pages X. From a large set of advertisements A, the commercial system chooses a single ad a for the topmost, or most prominent, position. It also chooses additional ads to display, but these were ignored in our test. The output y is an indicator of whether the user clicked on the ad or not. The total number of ads in the data set is approximately 880,000. The training data consist of 35 million events. The test data contain 19 million events occurring after the events in the training data. The total number of distinct web pages is approximately 3.4 million.

We trained a policy h to choose an ad, based on the current page, to maximize the probability of a click. For the purposes of learning, each ad and page was represented internally as a sparse high-dimensional feature vector. The features correspond to the words that appear on the page or in the ad, weighted by the frequency with which they appear. Each ad contains, on average, 30 ad features, and each page approximately 50 page features.

The particular form of f was linear over the features of its input (x, a)⁴. The particular policy that was optimized had an argmax form, h(x) = argmax_{a∈C(x)}{f(x, a)}, with a crucial distinction from previous approaches in how f(x, a) was trained. Here f : X × A → [0, 1] is a regression function trained to estimate the probability of a click, and C(x) = {a ∈ A | π̂(a|x) > 0} is the set of feasible ads. The training samples were of the form (x, a, y), where y = 1 if the ad a was clicked after being shown on page x, and y = 0 otherwise. The regressor f was chosen to approximately minimize the weighted squared loss (y − f(x, a))² / max{π̂(a_t|x_t), τ}. Stochastic gradient descent was used to minimize this squared loss on the training data. During the evaluation, we computed the estimator on the test data (x_t, a_t, y_t):

  V̂^h_π̂ = (1/T) Σ_{t=1}^T y_t · I(h(x_t) = a_t) / max{π̂(a_t|x_t), τ}.   (7)

⁴ Technically, the feature vector that the regressor uses is the Cartesian product of the page and ad vectors.

As mentioned in the introduction, this estimator is biased due to the use of the parameter τ > 0. As shown in the analysis of Section 3, this bias typically underestimates the true value of the policy h. We experimented with different thresholds τ and parameters of our learning algorithm.⁵ Results are summarized in Figure 2. The Interval column is computed using the relative-entropy form of the Chernoff bound with δ = 0.05, which holds under the assumption that the variables — in our case the samples used in the computation of the estimator (Equation 7) — are IID. Note that this computation is slightly complicated because the range of the variables is [0, 1/τ] rather than [0, 1], as is typical. This is handled by rescaling by τ, applying the bound, and then rescaling the results by 1/τ.

⁵ For stochastic gradient descent, we varied the learning rate over 5 fixed values (0.2, 0.1, 0.05, 0.02, 0.01), using 1 pass over the data. We report the test results for the value with the best training error.

  Method    τ      Estimate   Interval
  Learned   0.01   0.0193     [0.0187, 0.0206]
  Random    0.01   0.0154     [0.0149, 0.0166]
  Learned   0.05   0.0132     [0.0129, 0.0137]
  Random    0.05   0.0111     [0.0109, 0.0116]
  Naive     0.05   0.0        [0, 0.0071]

Figure 2: Results of various algorithms on the ad display dataset. Note that these numbers were computed using a not-necessarily-uniform sample of data.

The "Random" policy chooses randomly from the set of feasible ads: Random(x) = a ∼ UNIF(C(x)), where UNIF(·) denotes the uniform distribution. The "Naive" policy corresponds to the theoretically flawed supervised learning approach detailed in the introduction. The evaluation of this policy is quite expensive, requiring one evaluation per ad per example, so the size of the test set is reduced to 8373 examples with a click, which reduces the significance of the results. We bias the results towards the naive policy by choosing the chronologically first events in the test set (i.e. the events most similar to those in the training set). Nevertheless, the naive policy receives 0 reward, which is significantly less than all other approaches. A possible fear with the evaluation here is that the naive policy is always finding good ads that simply weren't explored. A quick check shows that this is not correct — the naive argmax simply makes implausible choices. Note that we report only the evaluation against τ = 0.05, as the evaluation against τ = 0.01 is not significant, although the reward obviously remains 0.

The "Learned" policies do depend on τ. As suggested by Theorem 3.2, as τ is decreased, the effective set of hypotheses we compete with is increased, thus allowing for better performance of the learned policy. Indeed, the estimates for both the learned policy and the random policy improve when we decrease τ from 0.05 to 0.01.
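A minimal sketch of the importance-weighted squared-loss training described above, for a linear f trained with stochastic gradient descent. The data layout (precomputed joint feature vector and logging probability per event) and hyperparameter names are our own assumptions.

    # Sketch of SGD on the weighted squared loss (y - f(x,a))^2 / max(p, tau),
    # with f linear in the joint (page x ad) feature vector phi.
    import numpy as np

    def train_f(events, dim, tau, lr=0.05, passes=1):
        """events: iterable of (phi, y, p) with phi the joint feature vector,
        y in {0, 1} the click indicator, and p = pi_hat(a | x)."""
        w = np.zeros(dim)
        for _ in range(passes):
            for phi, y, p in events:
                weight = 1.0 / max(p, tau)
                err = float(np.dot(w, phi)) - y
                w -= lr * weight * err * phi  # gradient of the weighted loss (up to a factor of 2)
        return w

A learned policy then plays argmax over the feasible set, h(x) = argmax_{a∈C(x)} w·φ(x, a), mirroring the argmax form above; the 1/max{π̂, τ} weight is what makes the regression emphasize good estimation for each action equally.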
The empirical click-through rate on the test set was 0.0213, which is slightly larger than the estimate for the best learned policy. However, this number is not directly comparable, since the estimator provides a lower bound on the true value of the policy due to the bias introduced by a nonzero τ, and because any deployed policy chooses from only the set of ads which are available to display, rather than the set of all ads which might have been displayable at other points in time.

The empirical results are generally consistent with the theoretical approach outlined here: they provide a consistently pessimal estimate of policy value which nevertheless has sufficient dynamic range to distinguish learned policies from random policies, learned policies over larger spaces (smaller τ) from smaller spaces (larger τ), and the theoretically unsound naive approach from sounder approaches which choose amongst the explored space of ads. It would be interesting future work to compare our approach to a full-fledged production online advertising system.

5 Conclusion

We stated, justified, and evaluated, theoretically and empirically, the first method for solving the warm-start problem for exploration from logged data with controlled bias and estimation error. This problem is of obvious interest to internet companies that recommend content (such as ads, search results, news stories, etc.) to users. However, we believe it may also be of interest for other application domains within machine learning. For example, in reinforcement learning, the standard approach to offline policy evaluation is based on importance-weighted samples [3, 11]. The basic results stated here could be applied to RL settings, eliminating the need to know the probability of a chosen action explicitly, and allowing an RL agent to learn from external observations of other agents.

References

[1] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[2] D. Horvitz and D. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47, 1952.
[3] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. Approximate planning in large POMDPs via reusable trajectories. In NIPS, 2000.
[4] Diane Lambert and Daryl Pregibon. More bang for their bucks: Assessing new features for online advertisers. In ADKDD 2007, 2007.
[5] John Langford, Alexander L. Strehl, and Jenn Wortman. Exploration scavenging. In ICML-08: Proceedings of the 25th International Conference on Machine Learning, 2008.
[6] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems 20, pages 817–824, 2008.
[7] Robert A. Lew. Bounds on negative moments. SIAM Journal on Applied Mathematics, 30(4):728–731, 1976.
[8] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the Nineteenth International Conference on World Wide Web (WWW-10), pages 661–670, 2010.
[9] Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the Fourth International Conference on Web Search and Web Data Mining (WSDM-11), 2011.
[10] Art Owen and Yi Zhou. Safe and effective importance sampling.
Journal of the American Statistical Association, 95:135–143, 1998.
[11] Doina Precup, Rich Sutton, and Satinder Singh. Eligibility traces for off-policy policy evaluation. In ICML, 2000.
Worst-case bounds on the quality of max-product fixed-points

Meritxell Vinyals, Artificial Intelligence Research Institute (IIIA), Spanish Scientific Research Council (CSIC), Campus UAB, Bellaterra, Spain. [email protected]
Jesús Cerquides, Artificial Intelligence Research Institute (IIIA), Spanish Scientific Research Council (CSIC), Campus UAB, Bellaterra, Spain. [email protected]
Alessandro Farinelli, Department of Computer Science, University of Verona, Strada le Grazie 15, Verona, Italy. [email protected]
Juan Antonio Rodríguez-Aguilar, Artificial Intelligence Research Institute (IIIA), Spanish Scientific Research Council (CSIC), Campus UAB, Bellaterra, Spain. [email protected]

Abstract

We study worst-case bounds on the quality of any fixed-point assignment of the max-product algorithm for Markov Random Fields (MRFs). We first provide a bound independent of the MRF structure and parameters. Afterwards, we show how this bound can be improved for MRFs with specific structures such as bipartite graphs or grids. Our results provide interesting insight into the behavior of max-product. For example, we prove that max-product provides very good results (at least 90% of the optimum) on MRFs with large variable-disjoint cycles¹.

1 Introduction

Graphical models such as Markov Random Fields (MRFs) have been successfully applied to a wide variety of applications such as image understanding [1], error-correcting codes [2], protein folding [3], and multi-agent systems coordination [4]. Many of these practical problems can be formulated as finding the maximum a posteriori (MAP) assignment, namely the most likely joint variable assignment in an MRF. The MAP problem is NP-hard [5], thus requiring approximate methods. Here we focus on a particular approximate MAP method: (loopy) max-product belief propagation [6, 7].

Max-product's popularity stems from its very good empirical performance on general MRFs [8, 9, 10, 11], but it comes with few theoretical guarantees. Concretely, max-product is known to be correct on acyclic and single-cycle MRFs [11], although convergence is only guaranteed in the acyclic case. Recently, some works have established that max-product is guaranteed to return the optimal solution, if it converges, on MRFs corresponding to some specific problems, namely: (i) weighted b-matching problems [12, 13]; (ii) maximum weight independent set problems [14]; or (iii) problems whose equivalent nand Markov random field (NMRF) is a perfect graph [?]. For weighted b-matching problems with a bipartite structure, Huang and Jebara [15] establish that the max-product algorithm always converges to the optimum.

Despite the guarantees provided in these particular cases, for arbitrary MRFs little is known about the quality of the max-product fixed-point assignments. To the best of our knowledge, the only result in this line is the work of Wainwright et al. [16] where, given any arbitrary MRF, the authors derive an upper bound on the absolute error of the max-product fixed-point assignment. This bound is calculated after running the max-sum algorithm and depends on the particular MRF (structure and parameters), and it therefore provides no a priori guarantees on the quality of max-product assignments on arbitrary MRFs with cycles. In this paper we provide quality guarantees for max-product fixed points in general settings that can be calculated prior to the execution of the algorithm.

¹ MRFs in which all cycles are variable-disjoint, namely in which cycles do not share any edge, and in which each cycle contains at least 20 variables.
To this end, we define worst-case bounds on the quality of any max-product fixed point for any MRF, independently of its structure and parameters. Furthermore, we show how tighter guarantees can be obtained for MRFs with specific structures. For example, we prove that in 2-D grids, max-product fixed-point assignments have at least 33% of the quality of the optimum, and that for MRFs with large variable-disjoint cycles¹ they have at least 90% of the quality of the optimum. These results shed some light on the relationship between the quality of max-product assignments and the structure of MRFs. Our results build upon two main components: (i) the characterization of any fixed-point max-product assignment as a neighbourhood maximum in a specific region of the MRF [17]; and (ii) the worst-case bounds on the quality of a neighbourhood maximum obtained in the K-optimality framework [18, 19]. We combine these two results by: (i) generalising the worst-case bounds in [18, 19] to consider any arbitrary region; and (ii) assessing worst-case bounds for the specific region presented in [17] (for which any fixed-point max-product assignment is known to be maximal).

2 Overview

2.1 The max-sum algorithm in pairwise Markov Random Fields

A discrete pairwise Markov Random Field (MRF) is an undirected graphical model where each interaction is specified by a discrete potential function, defined on a single variable or a pair of variables. The structure of an MRF defines a graph G = ⟨V, E⟩, in which the nodes V represent discrete variables and the edges E represent interactions between nodes. An MRF contains a unary potential function Φ_s for each node s ∈ V and a pairwise potential function Φ_st for each edge (s, t) ∈ E; the joint probability distribution of the MRF assumes the following form:

  p(x) = (1/Z) ∏_{s∈V} Φ_s(x_s) ∏_{(s,t)∈E} Φ_st(x_s, x_t)
       = (1/Z) exp( Σ_{s∈V} θ_s(x_s) + Σ_{(s,t)∈E} θ_st(x_s, x_t) ) = (1/Z) exp(θ(x)),   (1)

where Z is a normalization constant and θ_s(x_s), θ_st(x_s, x_t) stand for the logarithms of Φ_s(x_s), Φ_st(x_s, x_t), which are well defined if Φ_s(x_s), Φ_st(x_s, x_t) are strictly positive.

Within this setting, the classical problem of maximum a posteriori (MAP) estimation corresponds to finding the most likely configuration under the distribution p(x) in Equation 1. In more formal terms, the MAP configuration x* = {x*_s | s ∈ V} is given by:

  x* = argmax_{x∈X^N} ∏_{s∈V} Φ_s(x_s) ∏_{(s,t)∈E} Φ_st(x_s, x_t)
     = argmax_{x∈X^N} [ Σ_{s∈V} θ_s(x_s) + Σ_{(s,t)∈E} θ_st(x_s, x_t) ],   (2)

where X^N is the Cartesian product space in which x = {x_s | s ∈ V} takes values. Note that the MAP configuration may not be unique; that is, there may be multiple configurations that attain the maximum in Equation 1. In this work we assume that: (i) there is a unique MAP assignment (as assumed in [17]); and (ii) all potentials θ_s and θ_st are non-negative.

The max-product algorithm is an iterative, local, message-passing algorithm for finding the MAP assignment in a discrete MRF as specified by Equation 2. The max-sum algorithm is the correspondent of the max-product algorithm in the log-likelihood domain. The standard update rules for the max-sum algorithm are:

  m_ij(x_j) = α_ij + max_{x_i} [ θ_i(x_i) + θ_ij(x_i, x_j) + Σ_{k∈N(i)\j} m_ki(x_i) ],
  b_i(x_i) = θ_i(x_i) + Σ_{k∈N(i)} m_ki(x_i),

where α_ij is a normalization constant and N(i) is the set of indices of the variables connected to x_i. Here m_ij(x_j) represents the message that variable x_i sends to variable x_j. At the first iteration, all messages are initialised to constant functions. At each following iteration, each variable x_i aggregates all incoming messages and computes the belief b_i(x_i), which is then used to obtain the max-sum assignment x^MS. Specifically, for every variable x_i ∈ V we have x_i^MS = argmax_{x_i} b_i(x_i).
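As a concrete illustration of these update rules, the sketch below runs synchronous max-sum on a small pairwise MRF. It is our own minimal implementation (dictionary-based, normalization α_ij and damping omitted); on loopy graphs it may not converge, consistent with the discussion that follows.

    # Minimal synchronous max-sum on a pairwise MRF with small discrete domains.
    def max_sum(nodes, edges, theta_s, theta_st, domains, iters=50):
        """nodes: variable ids; edges: list of (s, t) pairs.
        theta_s[s][v]: unary log-potential; theta_st[(s, t)][(v, w)]: pairwise.
        domains[s]: list of values of variable s. Returns a decoded assignment."""
        msgs = {(i, j): {v: 0.0 for v in domains[j]}          # constant initial messages
                for (s, t) in edges for (i, j) in [(s, t), (t, s)]}
        nbrs = {s: [t for (a, b) in edges
                    for t in ([b] if a == s else [a] if b == s else [])] for s in nodes}
        def pair(i, j, vi, vj):  # look up the pairwise potential in either orientation
            return theta_st[(i, j)][(vi, vj)] if (i, j) in theta_st else theta_st[(j, i)][(vj, vi)]
        for _ in range(iters):
            new = {}
            for (i, j) in msgs:
                for vj in domains[j]:
                    new.setdefault((i, j), {})[vj] = max(
                        theta_s[i][vi] + pair(i, j, vi, vj)
                        + sum(msgs[(k, i)][vi] for k in nbrs[i] if k != j)
                        for vi in domains[i])
            msgs = new
        # beliefs b_i and the max-sum assignment x^MS
        return {i: max(domains[i], key=lambda vi: theta_s[i][vi]
                       + sum(msgs[(k, i)][vi] for k in nbrs[i])) for i in nodes}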
[Figure 1: (a) A 4-complete graph and (b)–(e) sets of variables covered by the SLT-region; boldfaced nodes mark the variables whose assignment changes. Graph drawings not reproducible from the extraction.]

The convergence of max-sum is usually characterized by considering fixed points of the message update rules, i.e. iterations at which all exchanged messages are equal to those of the previous iteration. The max-sum algorithm is known to be correct over acyclic and single-cycle graphs. Unfortunately, on general graphs the aggregation of messages flowing into each variable only represents an approximate solution to the maximization problem. Nonetheless, it is possible to characterize the solution obtained by max-sum, as we discuss below.

2.2 Neighborhood maximum characterization of max-sum fixed points

In [17], Weiss et al. characterize how well max-sum approximates the MAP assignment. In particular, they find the conditions for a fixed-point max-sum assignment x^MS to be a neighbourhood maximum, namely greater than all other assignments in a specific large region around x^MS. Notice that characterizing an assignment as a neighbourhood maximum is weaker than being a global maximum, but stronger than being a local maximum. Weiss et al. introduce the notion of the Single Loops and Trees (SLT) region to characterize the assignments in such a region.

Definition 1 (SLT region). The SLT-region of x in G includes all assignments x′ that can be obtained from x by: (i) choosing an arbitrary subset S ⊆ V such that its vertex-induced subgraph contains at most one cycle per connected component; and (ii) assigning arbitrary values to the variables in S while keeping the assignment to the other variables as in x.

Hence, we say that an assignment x^SLT is SLT-optimal if it is greater than any other assignment in its SLT region. Finally, the main result in [17] is the characterization of any max-sum fixed-point assignment as an SLT-optimum. Figures 1(b)–(e) illustrate examples of assignments in the SLT-region of the complete graph of Figure 1(a); there, boldfaced nodes stand for variables whose assignment varies with respect to x^SLT.

3 Generalizing size and distance optimal bounds

In [18], Pearce et al. introduced worst-case bounds on the quality of a neighbourhood maximum in a region characterized by its size. Similarly, Kiekintveld et al. [19] introduced analogous worst-case bounds using distance in the graph as the criterion. In this section we generalize these bounds so that they apply to any neighbourhood maximum in a region characterized by arbitrary criteria. Concretely, we show that our generalization can be used to bound the quality of max-sum assignments.

3.1 C-optimal bounds

Hereafter we propose a general notion of region optimality, so-called C-optimality, and describe how to calculate bounds for a C-optimal assignment, namely an assignment that is a neighbourhood maximum in a region characterized by an arbitrary criterion C. The concept of C-optimality requires the introduction of several notions. Given A, B ⊆ V, we say that B completely covers A if A ⊆ B. We say that B does not cover A at all if A ∩ B = ∅. Otherwise, we say that B covers A partially.
A region C ⊆ P(V) is a set composed of subsets of V. We say that A ⊆ V is covered by C if there is a C′ ∈ C such that C′ completely covers A. Given two assignments x^A and x^B, we define D(x^A, x^B) as the set containing the variables whose values in x^A and x^B differ. An assignment is C-optimal if it cannot be improved by changing the values of any group of variables covered by C. That is, an assignment x^A is C-optimal if, for every assignment x^B such that D(x^A, x^B) is covered by C, we have that θ(x^A) ≥ θ(x^B). For any S ∈ E we define cc(S, C) = |{C′ ∈ C s.t. S ⊆ C′}|, that is, the number of elements in C that cover S completely. We also define nc(S, C) = |{C′ ∈ C s.t. S ∩ C′ = ∅}|, that is, the number of elements in C that do not cover S at all.

Proposition 1. Let G = ⟨V, E⟩ be a graphical model and C a region. If x^C is a C-optimum, then

  θ(x^C) ≥ ( cc* / (|C| − nc*) ) · θ(x*),   (3)

where cc* = min_{S∈E} cc(S, C), nc* = min_{S∈E} nc(S, C), and x* is the MAP assignment.

Proof. The proof is a generalization of the one in [20] for k-optimality. For every C′ ∈ C, consider an assignment x^{C′} such that x^{C′}_i = x^C_i if x_i ∉ C′ and x^{C′}_i = x*_i if x_i ∈ C′. Since x^C is C-optimal, θ(x^C) ≥ θ(x^{C′}) holds for all C′ ∈ C, and hence:

  θ(x^C) ≥ ( Σ_{C′∈C} θ(x^{C′}) ) / |C|.   (4)

Notice that although θ(x′) is defined as the sum of unary and pairwise potential values, we can always get rid of unary potentials by combining them into pairwise potentials without changing the structure of the MRF. In so doing, for each x′ we have that θ(x′) = Σ_{S∈E} θ_S(x′). We classify each edge S ∈ E into one of three disjoint groups, depending on whether C′ covers S completely (T(C′)), partially (P(C′)), or not at all (N(C′)), so that

  θ(x^{C′}) = Σ_{S∈T(C′)} θ_S(x^{C′}) + Σ_{S∈P(C′)} θ_S(x^{C′}) + Σ_{S∈N(C′)} θ_S(x^{C′}).

We can remove the partially covered potentials at the cost of obtaining a looser bound. Hence θ(x^{C′}) ≥ Σ_{S∈T(C′)} θ_S(x^{C′}) + Σ_{S∈N(C′)} θ_S(x^{C′}). Now, by the definition of x^{C′}, every variable x_i in a potential completely covered by C′ satisfies x^{C′}_i = x*_i, and every variable x_i in a potential not covered at all by C′ satisfies x^{C′}_i = x^C_i. Hence,

  θ(x^{C′}) ≥ Σ_{S∈T(C′)} θ_S(x*) + Σ_{S∈N(C′)} θ_S(x^C).

To assess a bound, after substituting this inequality into Equation 4, we have that:

  θ(x^C) ≥ ( Σ_{C′∈C} Σ_{S∈T(C′)} θ_S(x*) + Σ_{C′∈C} Σ_{S∈N(C′)} θ_S(x^C) ) / |C|.   (5)

We need to express the numerator in terms of θ(x^C) and θ(x*). Here is where the previously defined quantities cc(S, C) and nc(S, C) come into play. Grouping the sums by potentials and recalling that cc* = min_{S∈E} cc(S, C), the term on the left can be expressed as:

  Σ_{C′∈C} Σ_{S∈T(C′)} θ_S(x*) = Σ_{S∈E} cc(S, C) · θ_S(x*) ≥ Σ_{S∈E} cc* · θ_S(x*) = cc* · θ(x*).

Furthermore, recalling that nc* = min_{S∈E} nc(S, C), we can do the same with the right term:

  Σ_{C′∈C} Σ_{S∈N(C′)} θ_S(x^C) = Σ_{S∈E} nc(S, C) · θ_S(x^C) ≥ Σ_{S∈E} nc* · θ_S(x^C) = nc* · θ(x^C).

After substituting these two results into Equation 5 and rearranging terms, we obtain Equation 3.
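The quantities in Proposition 1 are purely combinatorial, so the bound can be computed directly from a graph and a region. A small sketch (our own helper, not the authors' code):

    # Computes the C-optimality bound of Proposition 1, cc*/(|C| - nc*),
    # for a region C over an edge set E.
    def c_optimal_bound(edges, region):
        """edges: iterable of 2-element sets of variables.
        region: iterable of sets of variables (the elements C' of C)."""
        region = [set(c) for c in region]
        cc_star = min(sum(1 for c in region if set(e) <= c) for e in edges)
        nc_star = min(sum(1 for c in region if not (set(e) & c)) for e in edges)
        return cc_star / (len(region) - nc_star)

This helper is reused below to reproduce the bipartite and grid examples of Section 4.2.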
3.2 Size-optimal bounds as a specific case of C-optimal bounds

Now we present the main result in [18] as a specific case of C-optimality. An assignment is k-size-optimal if it cannot be improved by changing the value of any group of k or fewer variables.

Proposition 2. For any MRF and for any k-optimal assignment x^k:

  θ(x^k) ≥ ( (k − 1) / (2|V| − k − 1) ) · θ(x*).   (6)

Proof. This result is just a specific case of our general result where we take as a region all subsets of size k, that is, C = {C′ ⊆ V | |C′| = k}. The number of elements in the region is |C| = C(|V|, k). The number of elements in C that completely cover an edge S is cc(S, C) = C(|V|−2, k−2) (take the two variables in S plus k−2 variables out of the remaining |V|−2). The number of elements in C that do not cover S at all is nc(S, C) = C(|V|−2, k) (take all k variables out of the remaining |V|−2 variables). Finally, we obtain Equation 6 by substituting |C|, cc* and nc* into Equation 3 and simplifying.

[Figure 2: Percent-optimal bounds (θ(x^MS)/θ(x*) × 100) for max-sum fixed-point assignments in specific MRF structures. (a) Bounds on complete, bipartite and 2-D grid structures when varying the number of variables. (b) Bounds on MRFs with variable-disjoint cycles when varying the number of cycles (d = 2, 4, 8, 128, 1024) and the minimum number of variables in each cycle. Plots not reproducible from the extraction.]

4 Quality guarantees on max-sum fixed-point assignments

In this section we define quality guarantees for max-sum fixed-point assignments in MRFs with arbitrary and with specific structures. Our quality guarantees prove that the value of any max-sum fixed-point assignment cannot be less than a fraction of the optimum. The main idea is that, by virtue of the characterization of any max-sum fixed-point assignment as SLT-optimal, we can select any region C composed of a combination of single cycles and trees of our graph and use it to compute its corresponding C-optimal bound by means of Proposition 1. We start by proving that bounds for a given graph apply to its subgraphs. Then we find that the bound for the complete graph applies to any MRF, independently of its structure and parameters. Afterwards we provide tighter bounds for MRFs with specific structures.

4.1 C-optimal bounds based on the SLT region

In this section we show that C-optimal bounds based on SLT-optimality for a given graph can be applied to any of its subgraphs.

Proposition 3. Let G = ⟨V, E⟩ be a graphical model and C the SLT-region of G. Let G′ = ⟨V′, E′⟩ be a subgraph of G. Then the bound of Equation 3 for G holds for any SLT-optimal assignment in G′.

Sketch of the proof. We can compose a region C′ containing the same elements as C but removing those variables which are not contained in V′. Note that SLT-optimality on G′ guarantees optimality in each element of C′. Observe that the bound obtained by applying Equation 3 to C′ is greater than or equal to the bound obtained for C. Hence, the bound for G applies also to G′.

A direct conclusion of Proposition 3 is that any bound based on the SLT-region of a complete graph of n variables can be directly applied to any subgraph of n or fewer variables, regardless of its structure. In what follows we assess the bound for a complete graph.

Proposition 4. Let G = ⟨V, E⟩ be a complete MRF. For any max-sum fixed-point assignment x^MS,

  θ(x^MS) ≥ ( 1 / (|V| − 2) ) · θ(x*).   (7)

Proof. Let C be a region containing every possible combination of three variables in V. Every set of three variables is part of the SLT-region because it can contain at most one cycle. The development in the proof of Proposition 2 can be applied here for k = 3 to obtain Equation 7.

Corollary 5. For any MRF, any max-sum fixed-point assignment x^MS satisfies Equation 7.
Since any graph can be seen as a subgraph of the complete graph with the same number of variables, the corollary is straightforward given Propositions 3 and 4. Figure 2(a) plots this structure-independent bound when varying the number of variables. Observe that it rapidly decreases with the number of variables and is only significant on very small MRFs. In the next section, we show how to exploit knowledge of the structure of an MRF to improve the bound's significance.

[Figure 3: Example of (a) a 3-3 bipartite graph and (b)–(p) sets of variables covered by the SLT-region. Graph drawings not reproducible from the extraction.]

4.2 SLT-bounds for specific MRF structures, independent of the MRF parameters

In this section we show that for MRFs with specific structures it is possible to provide bounds much tighter than the structure-independent bound provided by Corollary 5. These structures include, but are not limited to, bipartite graphs, 2-D grids, and variable-disjoint cycle graphs.

4.2.1 Bipartite graphs

In this section we instantiate the C-optimal bound of Equation 3 for any max-sum fixed-point assignment in an n-m bipartite MRF. An n-m bipartite MRF is a graph whose vertices can be divided into two disjoint sets, one with n variables and another with m variables, such that the n variables in the first set are connected to the m variables in the second set. Figure 3(a) depicts a 3-3 bipartite MRF.

Proposition 6. For any MRF with an n-m bipartite structure where m ≥ n, and for any max-sum fixed-point assignment x^MS, we have that

  θ(x^MS) ≥ b(n, m) · θ(x*),   where b(n, m) = 1/n if m ≥ n + 3, and b(n, m) = 2/(n + m − 2) if m < n + 3.   (8)

Proof. Let C^A be a region whose elements each include one out of the n variables together with all of the m variables (in Figure 3, elements (n)–(p)). Since the elements of this region are trees, we can guarantee optimality on them. The number of elements of the region is |C^A| = n. It is clear that each edge in the graph is completely covered by one of the elements of C^A, and hence cc* = 1. Furthermore, every edge is at least partially covered by every element, since all of the m variables are present in each one, and hence nc* = 0. Applying Equation 3 gives the bound 1/n. Alternatively, we can define a region C^B formed by taking sets of four variables, two from each side. Since the elements of C^B are single-cycle graphs (in Figure 3, elements (b)–(j)), we can guarantee optimality on them. Applying Proposition 1, we obtain the bound 2/(n + m − 2). Observe that 2/(n + m − 2) > 1/n when m < n + 3, and so Equation 8 holds (details can be found in the additional material).

Example 1. Consider the 3-3 bipartite MRF of Figure 3(a). Figures 3(b)–(j) show the elements of the region C^B composed of sets of four variables, two from each side; therefore |C^B| = 9. For any edge S ∈ E there are 4 sets in C^B that contain its two variables. For example, the edge that links the upper left variable (x0) and the upper right variable (x3) is included in the subgraphs of Figures 3(b), (c), (e) and (f). Moreover, for any edge S ∈ E there is a single element in C^B that does not cover it at all. For example, the only subgraph that includes neither x0 nor x3 is that of Figure 3(j). Thus, the bound is 4/(9 − 1) = 1/2.
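Example 1 can be reproduced mechanically with the c_optimal_bound sketch given after Proposition 1; the variable labels below (0-2 on one side, 3-5 on the other) are our own encoding of Figure 3(a).

    # Reproducing Example 1: the 3-3 bipartite graph and the region C^B of
    # all four-variable sets taking two variables from each side.
    from itertools import combinations

    left, right = [0, 1, 2], [3, 4, 5]
    edges = [{u, v} for u in left for v in right]
    region_B = [set(p) | set(q) for p in combinations(left, 2)
                                for q in combinations(right, 2)]
    print(c_optimal_bound(edges, region_B))  # 4 / (9 - 1) = 0.5, as in Example 1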
Figure 2(a) plots the bound of Equation 8 for bipartite graphs when varying the number of variables. Note that, although the value of this bound also rapidly decreases with the number of variables, it is twice the value of the structure-independent bound (see Equation 7).

4.2.2 Two-dimensional (2-D) grids

In this section we instantiate the C-optimal bound of Equation 3 for any max-sum fixed-point assignment in a two-dimensional grid MRF. An n-grid structure stands for a graph with n rows and n columns where each interior variable has 4 neighbours. Figure 4(a) depicts a 4-grid MRF.

[Figure 4: Example of (a) a 4-grid graph and (b)–(e) sets of variables covered by the SLT-region. Graph drawings not reproducible from the extraction.]

Proposition 7. For any MRF with an n-grid structure where n is an even number, and for any max-sum fixed-point assignment x^MS, we have that

  θ(x^MS) ≥ ( n / (3n − 4) ) · θ(x*).   (9)

Proof. We can partition the columns into pairs, joining column 1 with column n/2 + 1, column 2 with column n/2 + 2, and so on. We can partition the rows in the same way. Let C be a region where each element contains the vertices in a pair of rows at distance n/2 together with those in a pair of columns at distance n/2. Note that optimality is guaranteed in each C′ ∈ C because the variables in two non-consecutive rows and two non-consecutive columns form a single-cycle graph. Since we take every possible combination, |C| = (n/2)². Each edge is completely covered by n/2 elements, and hence cc* = n/2. Finally², for each edge S there are nc* = (n/2 − 1)(n/2 − 2) elements of C that do not cover S at all. Substituting these values into Equation 3 leads to Equation 9.

² Details can be found in the additional material.

Example 2. Consider the 4-grid MRF of Figure 4(a). Figures 4(b)–(e) show the vertex-induced subgraphs for each set of vertices in the region C formed by combining any pair of rows in {(1, 3), (2, 4)} with any pair of columns in {(1, 3), (2, 4)}; therefore |C| = 4. For any edge S ∈ E there are 2 sets that contain its two variables. For example, the edge that links the first two variables in the first row, namely x0 and x1, is included in two of the four subgraphs. Moreover, for any edge S ∈ E there is no set that contains no variable from S. Thus, the bound is 2/4 = 1/2.

Figure 2(a) plots the bound for 2-D grids when varying the number of variables. Note that, compared with the bounds for complete and bipartite structures, the bound for 2-D grids decreases smoothly and tends to stabilize as the number of variables increases. In fact, observe that by Equation 9 the bound for 2-D grids is never less than 1/3, independently of the grid size.
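Example 2 can also be checked with the same c_optimal_bound sketch; the 0-indexed row/column pairs below are our encoding of the region in the example.

    # Checking Example 2: a 4x4 grid, region pairing rows {1,3} or {2,4} with
    # columns {1,3} or {2,4} (written 0-indexed here as (0,2) and (1,3)).
    n = 4
    nodes = [(r, c) for r in range(n) for c in range(n)]
    edges = [{(r, c), (r, c + 1)} for r in range(n) for c in range(n - 1)] + \
            [{(r, c), (r + 1, c)} for r in range(n - 1) for c in range(n)]
    row_pairs, col_pairs = [(0, 2), (1, 3)], [(0, 2), (1, 3)]
    region = [{(r, c) for (r, c) in nodes if r in rp or c in cp}
              for rp in row_pairs for cp in col_pairs]
    print(c_optimal_bound(edges, region))  # 2 / (4 - 0) = 0.5 = n/(3n - 4) for n = 4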
4.2.3 MRFs that are a union of variable-disjoint cycles

In this section we assess a bound for MRFs composed of a set of variable-disjoint cycles, namely cycles that do not share any variable. A common pattern shared by the bounds assessed so far is that they decrease as the number of variables of an MRF grows. This section provides an example showing that there are specific structures for which C-optimality yields significant bounds for large MRFs.

Example 3. Consider the MRF composed of two variable-disjoint cycles of size 4 depicted in Figure 5(a). To create the region, we remove each of the variables of the first cycle, one at a time (see Figures 5(b)–(e)). We act analogously with the second cycle. Hence, C is composed of 8 elements. Just by counting, we observe that each edge is completely covered 6 times, so cc* = 6. Since we are removing a single variable at a time, nc* = 0. Hence, the bound for a max-sum fixed point in this MRF structure is 6/8 = 3/4.

[Figure 5: (a) An MRF composed of two variable-disjoint cycles of size 4, and (b)–(e) sets of variables covered by the SLT-region. Graph drawings not reproducible from the extraction.]

The following result generalizes the previous example to MRFs containing d variable-disjoint cycles of size at least l.

Proposition 8. For any MRF such that every pair of cycles is variable-disjoint and where there are at most d cycles, each of size l or larger, and for any max-sum fixed-point assignment x^MS, we have that:

  θ(x^MS) ≥ ( 1 − 2(d − 1)/(d·l) ) · θ(x*) = ( ((l − 2)·d + 2) / (l·d) ) · θ(x*).   (10)

The proof generalizes the region explained in Example 3 to any variable-disjoint cycle MRF by defining a region that includes an element for every possible variable removal from every cycle but one. The proof is omitted here due to lack of space, but can be consulted in the additional material.

Equation 10 shows that the bound: (i) decreases with the number of cycles; and (ii) increases as the size of the cycles grows. Figure 2(b) illustrates the relationship between the bound, the number of cycles (d), and the minimum size of the cycles (l). The first thing we observe is that the size of the cycles has more impact on the bound than the number of cycles. In fact, observe that by Equation 10 the bound for a variable-disjoint cycle graph with a minimum cycle size of l is at least (l − 2)/l, independently of the number of cycles. Thus, if the minimum size of a cycle is 20, the quality of a fixed point is guaranteed to be at least 90%. Hence, quality guarantees for max-sum fixed points are good whenever: (i) the cycles in the MRF do not share any variables; and (ii) the smallest cycle in the MRF is large. Therefore, our result confirms and refines the recent results obtained for single-cycle MRFs [11].
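A one-line helper (our own, not the authors' code) makes the behaviour of Equation 10 easy to inspect numerically:

    # Variable-disjoint-cycle bound of Equation 10.
    def disjoint_cycle_bound(d, l):
        """d: number of variable-disjoint cycles; l: minimum cycle size."""
        return 1.0 - 2.0 * (d - 1) / (d * l)

    print(disjoint_cycle_bound(2, 4))      # 0.75, matching Example 3
    print(disjoint_cycle_bound(1024, 20))  # ~0.900: at least 90% once cycles have 20+ variables

As the printed values show, the bound is dominated by l: even for over a thousand cycles it stays above (l − 2)/l.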
4.3 SLT-bounds for arbitrary MRF structures, independent of the MRF parameters

In this section we discuss how to assess tight SLT-bounds for any arbitrary MRF structure. Similarly to [18, 20], we can use linear fractional programming (LFP) to compute structure-specific SLT bounds for an MRF with any arbitrary structure. Let C be a region containing all subsets in the SLT region of the graphical model G = ⟨V, E⟩ of an MRF. For each S ∈ E, the LFP contains two LFP variables representing the value of the edge S under the SLT-optimum, x^MS, and under the MAP assignment, x*. The objective of the LFP is to minimize Σ_{S∈E} θ_S(x^MS) / Σ_{S∈E} θ_S(x*) subject to, for all C′ ∈ C, θ(x^MS) − θ(x^{C′}) ≥ 0. Following [18, 20], for each C′ ∈ C, θ(x^{C′}) can be expressed in terms of the values of the potentials for x^MS and x*. Then the optimal value of this LFP is a tight bound for any MRF with the given specific structure. Indeed, the solution of the LFP provides the values of the potentials for x^MS and x* that produce the worst-case MRF, namely the one whose SLT-optimum has the lowest value with respect to the optimum. However, because this method requires listing all the sets in the SLT-region, the complexity of generating an LFP increases exponentially with the number of variables in the MRF. Therefore, although this method provides more flexibility to deal with arbitrary structures, its computational cost does not scale with the size of the MRF, in contrast with the structure-specific SLT-bounds of Section 4.2, which are assessed in constant time.

5 Conclusions

We provided worst-case bounds on the quality of any max-product fixed point. To this end, we introduced C-optimality, which has proven a valuable tool for bounding the quality of max-product fixed points. Concretely, we have proven that, independently of an MRF's structure, max-product has a quality guarantee that decreases with the number of variables of the MRF. Furthermore, our results allow us to identify new classes of MRF structures, besides acyclic and single-cycle ones, for which we can provide theoretical guarantees on the quality of max-product assignments. As examples, we derived significant bounds for 2-D grids and for MRFs with variable-disjoint cycles.

Acknowledgments

Work funded by projects EVE (TIN2009-14702-C02-01, TIN2009-14702-C02-02), AT (CONSOLIDER CSD2007-0022), and Generalitat de Catalunya (2009-SGR-1434). Vinyals is supported by the Ministry of Education of Spain (FPU grant AP2006-04636).

References

[1] Marshall F. Tappen and William T. Freeman. Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters. In ICCV, pages 900–907, 2003.
[2] Jon Feldman, Martin J. Wainwright, and David R. Karger. Using linear programming to decode binary linear codes. IEEE Transactions on Information Theory, 51(3):954–972, 2005.
[3] Chen Yanover and Yair Weiss. Approximate inference and protein-folding. In Advances in Neural Information Processing Systems, pages 84–86. MIT Press, 2002.
[4] Alessandro Farinelli, Alex Rogers, Adrian Petcu, and Nicholas R. Jennings. Decentralised coordination of low-power embedded devices using the max-sum algorithm. In AAMAS, pages 639–646, 2008.
[5] Solomon Eyal Shimony. Finding MAPs for belief networks is NP-hard. Artificial Intelligence, 68(2):399–410, 1994.
[6] Judea Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[7] Srinivas M. Aji and Robert J. McEliece. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325–343, 2000.
[8] Srinivas Aji, Gavin Horn, Robert McEliece, and Meina Xu. Iterative min-sum decoding of tail-biting codes. In Proc. IEEE Information Theory Workshop, pages 68–69, 1998.
[9] Brendan J. Frey, Ralf Koetter, G. David Forney Jr., Frank R. Kschischang, Robert J. McEliece, and Daniel A. Spielman. Introduction to the special issue on codes on graphs and iterative algorithms. IEEE Transactions on Information Theory, 47(2):493–497, 2001.
[10] Brendan J. Frey, Ralf Koetter, and Nemanja Petrovic. Very loopy belief propagation for unwrapping phase images. In NIPS, pages 737–743, 2001.
[11] Yair Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
[12] Mohsen Bayati, Christian Borgs, Jennifer T. Chayes, and Riccardo Zecchina. Belief-propagation for weighted b-matchings on arbitrary graphs and its relation to linear programs with integer solutions. CoRR, abs/0709.1190, 2007.
Sphere Embedding: An Application to Part-of-Speech Induction

Yariv Maron, Gonda Brain Research Center, Bar-Ilan University, Ramat-Gan 52900, Israel ([email protected])
Michael Lamar, Department of Mathematics and Computer Science, Saint Louis University, St. Louis, MO 63103, USA ([email protected])
Elie Bienenstock, Division of Applied Mathematics and Department of Neuroscience, Brown University, Providence, RI 02912, USA ([email protected])

Abstract

Motivated by an application to unsupervised part-of-speech tagging, we present an algorithm for the Euclidean embedding of large sets of categorical data based on co-occurrence statistics. We use the CODE model of Globerson et al. but constrain the embedding to lie on a high-dimensional unit sphere. This constraint allows for efficient optimization, even in the case of large datasets and high embedding dimensionality. Using k-means clustering of the embedded data, our approach efficiently produces state-of-the-art results. We analyze the reasons why the sphere constraint is beneficial in this application, and conjecture that these reasons might apply quite generally to other large-scale tasks.

1 Introduction

The embedding of objects in a low-dimensional Euclidean space is a form of dimensionality reduction that has been used in the past mostly to create 2D representations of data for the purpose of visualization and exploratory data analysis [10, 13]. Most methods work on objects of a single type, endowed with a measure of similarity. Other methods, such as [3], embed objects of heterogeneous types, based on their co-occurrence statistics. In this paper we demonstrate that the latter can be successfully applied to unsupervised part-of-speech (POS) induction, an extensively studied, challenging problem in natural language processing [1, 4, 5, 6, 7].

The problem we address is distributional POS tagging, in which words are to be tagged based on the statistics of their immediate left and right context in a corpus (ignoring morphology and other features). The induction task is fully unsupervised, i.e., it uses no annotations. This task has been addressed in the past using a variety of methods. Some approaches, such as [1], combine a Markovian assumption with clustering. Many recent works use HMMs, perhaps due to their excellent performance on the supervised version of the task [7, 2, 5]. Using a latent-descriptor clustering approach, [15] obtain the best results to date for distributional-only unsupervised POS tagging of the widely-used WSJ corpus.

Using a heterogeneous-data embedding approach for this task, we define separate embedding functions for the objects "left word" and "right word" based on their co-occurrence statistics, i.e., based on bigram frequencies. We are interested in modeling the statistical interactions between left words and right words, as relevant to POS tagging, rather than their joint distribution. Indeed, modeling the joint distribution directly results in models that do not handle rare words well. We use the CODE (Co-Occurrence Data Embedding) model of [3], where statistical interaction is modeled as the negative exponential of the squared Euclidean distance between the embedded points. This embedding model incorporates the marginal probabilities, or unigram frequencies, in a way that results in appropriate handling of both frequent and rare words.
The size of the dataset (number of points to embed) and the embedding dimensionality are several-fold larger than in the applications studied in [3], making the optimization methods used by these authors impractical. Instead, we use a simple and intuitive stochastic-gradient procedure. Importantly, in order to handle both the large dataset and the relatively high dimensionality of the embedding needed for this application, we constrain the embedding to lie on the unit sphere. We therefore refer to this method as Spherical CODE, or S-CODE. The spherical constraint causes the regularization term, the partition function, to be nearly constant and also makes the stochastic gradient ascent smoother; this allows a several-fold computational improvement, and yields excellent performance. After convergence of the embedding model, we use a k-means algorithm to cluster all the words of the corpus, based on their embeddings. The induced POS labels are evaluated using the standard setting for this task, yielding state-of-the-art tagging performance.

2 Methods

2.1 Model

We represent a bigram, i.e., an ordered pair of adjacent words in the corpus, as joint random variables (X,Y), each taking values in W, the set of word types occurring in the corpus. Since X and Y, the first and second words in a bigram, play different roles, we build a heterogeneous model, i.e., use two embedding functions, $\phi$ and $\psi$. Both map W into S, the unit sphere in the r-dimensional Euclidean space. We use $\bar{p}(x)$ for the word-type frequencies: $\bar{p}(x)$ is the number of word tokens of type x divided by the total number of tokens in the corpus. We refer to $\bar{p}$ as the empirical marginal distribution, or unigram frequency. We use $\tilde{p}(x,y)$ for the empirical joint distribution of X and Y, i.e., the distribution of bigrams (X,Y). Because our ultimate goal is the clustering of word types for POS tagging, we want the embedding to be insensitive to the marginals: two word types with similar context distributions should be mapped to neighboring points in S even if their unigram frequencies are very different. We therefore use the marginal-marginal model of [3], defined by:

$d_{x,y} = \|\phi(x) - \psi(y)\|,$   (1)
$p(x,y) = \frac{1}{Z}\,\bar{p}(x)\,\bar{p}(y)\,e^{-d_{x,y}^2},$   (2)
$Z = \sum_{x,y} \bar{p}(x)\,\bar{p}(y)\,e^{-d_{x,y}^2}.$   (3)

The log-likelihood, $\ell$, of the corpus of bigrams is the expected value, under the empirical bigram distribution, of the log of the model bigram probability:

$\ell = \sum_{x,y} \tilde{p}(x,y)\,\log p(x,y) = -\sum_{x,y} \tilde{p}(x,y)\,d_{x,y}^2 - \log Z + \sum_{x,y} \tilde{p}(x,y)\,\log\bigl(\bar{p}(x)\,\bar{p}(y)\bigr).$   (4)

The model is parameterized by 2|W| points on the unit sphere S in r dimensions: $\{\phi(x)\}_{x\in W}$ and $\{\psi(y)\}_{y\in W}$. These points are initialized randomly, i.e., independently and uniformly on S. To maximize the likelihood, we use a gradient-ascent approach. The gradient of the log likelihood is as follows (observe that the last term in (4) does not depend on the model, hence does not contribute to the gradient):

$\frac{\partial \ell}{\partial \phi(x)} = \sum_y \tilde{p}(x,y)\,2\bigl(\psi(y)-\phi(x)\bigr) \;-\; \frac{1}{Z}\sum_y \bar{p}(x)\,\bar{p}(y)\,e^{-d_{x,y}^2}\,2\bigl(\psi(y)-\phi(x)\bigr),$   (5)
$\frac{\partial \ell}{\partial \psi(y)} = \sum_x \tilde{p}(x,y)\,2\bigl(\phi(x)-\psi(y)\bigr) \;-\; \frac{1}{Z}\sum_x \bar{p}(x)\,\bar{p}(y)\,e^{-d_{x,y}^2}\,2\bigl(\phi(x)-\psi(y)\bigr).$   (6)

For sufficiently large problems such as POS tagging of a large corpus, computing the partition function, Z, after each gradient step or even once every fixed number of steps can be impractical. Instead, it turns out (see Discussion) that, thanks to the sphere constraint, we can approximate this dynamic variable, Z, using a constant, $\tilde{Z}$, which arises from a coarse approximation in which all pairs of embedded variables are distributed uniformly and independently on the sphere. Thus, we take $\phi$ and $\psi$ i.i.d. uniformly on S, and get our estimate as the expected value of the resulting random variable:

$\tilde{Z} = E_{\phi,\psi\,\sim\,\mathrm{Unif}(S)}\bigl[e^{-\|\phi-\psi\|^2}\bigr].$   (7)

Numerical evaluation of (7) yields $\tilde{Z} \approx 0.1456$ for the 25-dimensional sphere.
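This estimate is easy to reproduce numerically; the following is a minimal sketch (Python/NumPy, not the authors' released code), in which the sample size is an arbitrary choice:

```python
# A minimal sketch of equation (7): estimate Z-tilde = E[exp(-||phi - psi||^2)]
# for phi, psi drawn i.i.d. uniformly on the unit sphere S in r dimensions.
import numpy as np

def sample_sphere(num, r, rng):
    # Normalizing i.i.d. gaussian vectors yields the uniform distribution on S.
    x = rng.standard_normal((num, r))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def z_tilde_estimate(r, num_pairs=200000, seed=0):
    rng = np.random.default_rng(seed)
    phi = sample_sphere(num_pairs, r, rng)
    psi = sample_sphere(num_pairs, r, rng)
    d2 = np.sum((phi - psi) ** 2, axis=1)
    return float(np.mean(np.exp(-d2)))

print(z_tilde_estimate(25))   # approximately 0.146 for the 25-dimensional sphere
```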
An even coarser approximation can be obtained by noting that, for large r, the squared distance $\|\phi-\psi\|^2 = 2 - 2\,\phi\cdot\psi$ is fairly peaked around 2 (the random variable $\phi\cdot\psi$ is close to a Student's t with r degrees of freedom, compressed by a factor of $1/\sqrt{r}$). This yields the estimate $\tilde{Z} \approx e^{-2} \approx 0.135$. For the present application, we find that performance does not suffer from using a constant $\tilde{Z}$ rather than recomputing Z often during gradient ascent. It is also fairly robust to the choice of $\tilde{Z}$: we observe only minor changes in performance for $\tilde{Z}$ ranging over [0.1, 0.5].

We use sampling to compute a stochastic approximation of the gradient. To implement the first sum in (5) and (6), representing an attraction force between the embeddings of the words in a bigram, we sample bigrams from the empirical joint $\tilde{p}$. Given a sample $(x_1, y_1)$, only the $\phi(x_1)$ and $\psi(y_1)$ parameter vectors are updated. The partial updates that emerge from these two sums are:

$\phi(x_1) \leftarrow \phi(x_1) + \eta\,\bigl(\psi(y_1) - \phi(x_1)\bigr),$   (8)
$\psi(y_1) \leftarrow \psi(y_1) + \eta\,\bigl(\phi(x_1) - \psi(y_1)\bigr),$   (9)

where $\eta$ is the step size. In order to speed up the convergence process, we use a learning rate that decreases as word types are repeatedly observed. If $C_w$ is the number of times word type w has been previously encountered, we use a step size $\eta(C_w)$ that decreases smoothly with $C_w$ (10). The model is very robust to the choice of the function $\eta(C)$, as long as it decreases smoothly. This modified learning rate also reduces the variability of the tagging accuracy, while slightly increasing its mean.

The second sum in (5) and in (6), representing a repulsion force, involves not the empirical joint but the product of the empirical marginals. Thus, the complete update is:

$\phi(x_2) \leftarrow \phi(x_2) - \eta\,\frac{e^{-d_{x_2,y_2}^2}}{\tilde{Z}}\,\bigl(\psi(y_2) - \phi(x_2)\bigr),$   (11)
$\psi(y_2) \leftarrow \psi(y_2) - \eta\,\frac{e^{-d_{x_2,y_2}^2}}{\tilde{Z}}\,\bigl(\phi(x_2) - \psi(y_2)\bigr),$   (12)

where $(x_1, y_1)$ is sampled from the joint $\tilde{p}$, and $x_2$ and $y_2$ are sampled from the marginal $\bar{p}$ independently from each other and independently from $x_1$ and $y_1$. After each step, the updated vectors are projected back onto the sphere S.

After convergence, for any word w, we have two embedded vectors, $\phi(w)$ and $\psi(w)$. These vectors are concatenated to form a single geometric description of word type w. The collection of all these vectors is then clustered using a weighted k-means clustering algorithm: in each iteration, a cluster's centroid is updated as the weighted mean of its currently assigned constituent vectors, with the weight of the vector for word w equal to $\bar{p}(w)$. The number of clusters chosen depends on whether evaluation is to be done against the PTB45 or the PTB17 tagset (see below, Section 2.2). (Source code is available at the author's website: faculty.biu.ac.il/~marony.)
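One stochastic update, combining (8), (9), (11), (12) and the projection onto the sphere, can be summarized compactly. The following is a minimal sketch, not the released implementation; in particular, the schedule eta0/(1 + C/tau) is a hypothetical stand-in for the unspecified smoothly decreasing eta(C):

```python
# A minimal sketch of one S-CODE stochastic update: attraction on a bigram
# drawn from the empirical joint, repulsion on a pair drawn independently
# from the marginals, then projection back onto the sphere.
import numpy as np

def project(v):
    return v / np.linalg.norm(v)

def scode_step(phi, psi, bigrams, pbar, counts, z_tilde, rng,
               eta0=0.1, tau=1000.0):
    # phi, psi: |W| x r embedding arrays; counts: per-word-type counters C_w.
    x1, y1 = bigrams[rng.integers(len(bigrams))]   # sample from the joint
    x2 = rng.choice(len(pbar), p=pbar)             # sample from the marginal,
    y2 = rng.choice(len(pbar), p=pbar)             # independently
    eta = lambda w: eta0 / (1.0 + counts[w] / tau)  # hypothetical schedule
    # Attraction, equations (8) and (9).
    phi[x1] = project(phi[x1] + eta(x1) * (psi[y1] - phi[x1]))
    psi[y1] = project(psi[y1] + eta(y1) * (phi[x1] - psi[y1]))
    # Repulsion, equations (11) and (12), scaled by exp(-d^2) / Z-tilde.
    rep = np.exp(-np.sum((phi[x2] - psi[y2]) ** 2)) / z_tilde
    phi[x2] = project(phi[x2] - eta(x2) * rep * (psi[y2] - phi[x2]))
    psi[y2] = project(psi[y2] - eta(y2) * rep * (phi[x2] - psi[y2]))
    for w in (x1, y1, x2, y2):
        counts[w] += 1
```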
2.2 Evaluation and data

The resulting assignment of cluster labels to word types is used to label the corpus. The standard practice for evaluating the performance of the induced labels is to either map them to the gold-standard tags, or to use an information-theoretic measure. We use the three evaluation criteria that are most common in the recent literature. The first criterion maps each cluster to the POS tag that it best matches according to the hand-annotated labels. The match is determined by finding the tag that is most frequently assigned to any token of any word type in the cluster. Because the criterion is free to assign several clusters to the same POS tag, this evaluation technique is called many-to-one mapping, or MTO. Once the map is constructed, the accuracy score is obtained as the fraction of all tokens whose inferred tag under the map matches the hand-annotated tag. The second criterion, 1-to-1 mapping, is similar to the first, but the mapping is restricted from assigning multiple clusters to a single tag; hence it is called one-to-one mapping, or 1-to-1. Most authors construct the 1-to-1 mapping greedily, assigning maximal-score label-to-tag matches first; some authors, e.g. [15], use the optimal map. Once the map is constructed, the accuracy is computed just as in MTO. The third criterion, variation of information, or VI, is a map-free information-theoretic metric [9, 2]. We note that we and other authors found the most reliable criterion for comparing unsupervised POS taggers to be MTO. However, we include all three criteria for completeness.

We use the Wall Street Journal part of the Penn Treebank [8] (1,173,766 tokens). We ignore capitalization, leaving 43,766 word types, to compare performance with other models consistently. Evaluation is done against the full tag set (PTB45), and against a coarse tag set (PTB17) [12]. For PTB45 evaluation, we use either 45 or 50 clusters, in order for our results to be comparable to all recent works. For PTB17 evaluation, we use 17 clusters, as do all other authors.

3 Results

Figure 1 shows the model performance when evaluated with several measures. MTO17 and MTO50 refer to the number of tokens tagged correctly under the many-to-1 mapping for the PTB17 and PTB45 tagsets, respectively. The type-accuracy curves use the same mapping and tagsets, but record the fraction of word types whose inferred tag matches their "modal" annotated tag, i.e., the annotated tag co-occurring most frequently with this word type. We also show the scaled log likelihood, to illustrate its convergence. These results were produced using a constant, pre-computed $\tilde{Z}$. Using this constant value allows the model to run in a matter of minutes rather than the hours or days required by HMMs and MRFs.

[Figure 1 plot: log-likelihood, MTO17, MTO50, Type Accuracy 17, and Type Accuracy 50 against bigram updates (times 100,000).]

Figure 1: Scores against number of iterations (bigram updates). Scores are averaged over 10 sessions, and shown with 1-std error bars. MTO17 is the many-to-1 tagging accuracy score based on 17 induced labels mapped to 17 tags. MTO50 is the many-to-1 score based on 50 induced labels mapped to 45 tags. Type Accuracy 17 (50) is the average accuracy per word type, where the gold-standard tag of a word type is the modal annotated tag of that type (see text). All runs used $\tilde{Z}$ = 0.154, r = 25.

[Figure 2 plot: MTO17 for r = 25, 10, 5, 2, against bigram updates (times 100,000).]

Figure 2: Comparison of models with different dimensionalities: r = 2, 5, 10, 25. MTO17 is the many-to-1 score based on 17 induced labels mapped to PTB17 tags.

Figure 2 shows the model performance for different dimensionalities r. As r increases, so does the performance. Unlike previous applications of CODE [3] (which often emphasize visualization of data and thus require a low dimension), this unsupervised POS-tagging application benefits from high values of r. Larger values of r cause both the tagging accuracy to improve and the variability during convergence to decrease.
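The many-to-one criterion of Section 2.2, used for the MTO17 and MTO50 scores above, amounts to a short computation; a minimal sketch:

```python
# A minimal sketch of the many-to-one (MTO) criterion: map each induced
# cluster to the gold tag most frequently co-occurring with its tokens,
# then score the fraction of tokens whose mapped label matches the gold tag.
import numpy as np

def mto_accuracy(induced, gold, num_clusters, num_tags):
    # induced, gold: integer label arrays, one entry per corpus token.
    counts = np.zeros((num_clusters, num_tags), dtype=np.int64)
    np.add.at(counts, (induced, gold), 1)
    cluster_to_tag = counts.argmax(axis=1)        # the many-to-one map
    return float(np.mean(cluster_to_tag[induced] == gold))
```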
Table 1 (columns within each criterion: PTB17 / PTB45-45 / PTB45-50):

Model                 | Many-to-1                             | 1-to-1               | VI
S-CODE (Z̃ = 0.1456)  | 73.8 (0.5) / 68.8 (0.16) / 70.4 (0.5) | 52.2 / 50.0 / 50.0   | 2.93 / 3.46 / 3.46
S-CODE (Z̃ = 0.3)     | 74.5 (0.2) / 68.6 (0.16) / 71.5 (0.6) | 54.9 / 48.7 / 48.8   | 2.80 / 3.38 / 3.39
LDC                   | 75.1 (0.04) / 68.1 (0.2) / 71.2 (0.06)| 59.3 / 67.8 / 70.5   |      / 3.47 / 3.45
Brown                 |                                       | 48.3 / 50.1 / 51.3   |
HMM-EM                | 64.7 / 62.1 /                         | 43.1 / 40.5 /        | 3.86 / 4.48 /
HMM-VB                | 63.7 / 60.5 /                         | 51.4 / 46.1 /        | 3.44 / 4.28 /
HMM-GS                | 67.4 / 66.0 /                         | 44.6 / 49.9 /        | 3.46 / 4.04 /
HMM-Sparse(32)        | 70.2 / 65.4 /                         | 49.5 / 44.5 /        |
VEM (10^-1, 10^-1)    | 68.2 / 54.6 /                         | 52.8 / 46.0 /        |

Table 1: Comparison to other models, under three different evaluation measures. S-CODE uses r = 25 dimensions. It was run 10 times, each with 12 x 10^6 update steps. LDC is from [15]; Brown shows the best results from [14] and the website mentioned therein; HMM-EM, HMM-VB and HMM-GS show the best results from [2]; HMM-Sparse(32) and VEM show the best results from [5]. The numbers in parentheses are standard deviations. For the VI criterion, lower values are better. PTB45-45 maps 45 induced labels to 45 tags, while PTB45-50 maps 50 induced labels to 45 tags.

Table 1 compares our model, S-CODE, to previous state-of-the-art approaches. Under the Many-to-1 criterion, which we find to be the most appropriate of the three for the evaluation of unsupervised POS taggers, S-CODE is superior to HMM results, and scores comparably to [15], the highest-performing model to date on this task. We find that the model is very robust to the choice of $\tilde{Z}$ within the range 0.1 to 0.5. This robustness lends promise for the usefulness of this method for other applications in which the partition function is impractical to compute. This point is discussed further in the next section.

4 Discussion

The problem of embedding heterogeneous categorical data (X,Y) based on their co-occurrence statistics may be formulated as the task of finding a pair of maps $\phi$ and $\psi$ such that, for any pair (x,y), the distance between the images of x and y reflects the statistical interaction between them. Such embeddings have been used mostly for the purpose of visualization and exploratory data analysis. Here we demonstrate that embedding can be successfully applied to a well-studied computational-linguistics task, achieving state-of-the-art performance.

4.1 S-CODE v. CODE

The approach proposed here, S-CODE, is a variant of the CODE model of [3]. In the task at hand, the sets X and Y to be embedded are large (43K), making most conventional embedding approaches, including CODE (as implemented in [3]), impractical. As explained below, S-CODE overcomes the large-dataset challenge by constraining the maps to lie on the unit sphere. It uses stochastic gradient ascent to maximize the likelihood of the model. The gradient of the log-likelihood w.r.t. a given $\phi(x)$ includes two components, each with a simple intuitive meaning. The first component embodies an attraction force, pulling $\phi(x)$ toward $\psi(y)$ in proportion to the empirical joint $\tilde{p}(x,y)$. The second component, the gradient of the regularization term, $-\log Z$, embodies a repulsion force; it keeps the solution away from the trivial state where all x's and y's are mapped to the same point, and more generally attempts to keep Z small. The repulsion force pushes $\phi(x)$ away from $\psi(y)$ in proportion to the product of the empirical marginals $\bar{p}(x)$ and $\bar{p}(y)$, and is scaled by $e^{-d_{x,y}^2}/Z$. The computational complexity of Z, the partition function, is $O(|W|^2 r)$. In the application studied here, the use of the spherical constraint of S-CODE has two important consequences. First, it makes the computation of Z unnecessary.
Indeed, when using the spherical constraint, we observed that Z, when actually computed and updated every 10^6 steps, does not deviate much from its initial value. For example, for r = 25, Z rises smoothly from 0.145 to 0.182. Note that the absolute minimum of Z, obtained for a $\phi$ that maps all of W to a single point on S and a $\psi$ that maps all of W to the opposite point, is $e^{-4} \approx 0.018$; the absolute maximum of Z, obtained for $\phi$ and $\psi$ that map all of W to the same point, is 1. We also observed that replacing Z, in the update algorithm, by any constant in the range [0.1, 0.5] does not dramatically alter the behavior of the model. We nevertheless note that larger values of $\tilde{Z}$ tend to yield a slightly higher performance of the POS tagger built from the model. Note that the only effect of changing $\tilde{Z}$ in the stochastic gradient algorithm is to change the relative strength of the attraction and repulsion terms.

We compared the performance of S-CODE with CODE. The original CODE implementation [3] could not support the size of our data set. To overcome this limitation, we used the stochastic-gradient method described above, but without projecting to the sphere. This required us to compute the partition function, which is highly computationally intensive. We therefore computed the partition function only once every q update steps (where one update step is the sampling of one bigram). We found that for q = 10^5 the partition function and likelihood changed smoothly enough and converged, and the embeddings yielded tagging performances that did not differ significantly from those obtained with S-CODE.

The second important consequence of imposing the spherical constraint is that it makes the stochastic gradient-ascent procedure markedly smoother. As a result, a relatively large step size can be used, achieving convergence and excellent tagging performance in about 10 minutes of computation time on a desktop machine. CODE requires a smaller step size as well as the recomputation of the partition function, and, as a result, computation time in this application was 6 times longer than with S-CODE.

When gauging the applicability of S-CODE to different large-scale embedding problems, one should try to gain some understanding of why the spherical constraint stabilizes the partition function, and whether Z will stabilize around the same value for other problems. The answer to the first question appears to be that the regularization term is not so strong as to prevent clusters from forming (this is demonstrated by the excellent performance of the model when used for POS tagging), yet it is strong enough to enforce a fairly uniform distribution of these clusters on the sphere, resulting in a fairly stable value of Z. One may reasonably conjecture that this behavior will generalize to other problems. To answer the second question, we note that the order of magnitude of Z is essentially set by the coarsest of the two estimates derived in Section 2, namely 0.135, and that this estimate is problem-independent. As a result, S-CODE is, in principle, applicable to datasets of much larger size than the present problem. The computational complexity of the algorithm is O(Nr), and the memory requirement is O(|W|r), where N is the number of word tokens and |W| is the number of word types. In contrast, and as mentioned above, CODE, even in our stochastic-gradient version, is considerably more computationally intensive; it would clearly be completely impractical for much larger datasets.
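For contrast with the constant $\tilde{Z}$, the exact partition function of equation (3) can be evaluated directly; a minimal sketch, whose cost for |W| = 43,766 word types illustrates why frequent recomputation is impractical:

```python
# A sketch of the exact partition function Z = sum_{x,y} pbar(x) pbar(y)
# exp(-d^2_{x,y}); on the unit sphere, d^2 = 2 - 2 phi.psi. The O(|W|^2 r)
# time (here computed in row blocks to bound memory) is what makes
# recomputing Z at every step impractical.
import numpy as np

def partition_function(phi, psi, pbar, block=1024):
    z = 0.0
    for start in range(0, phi.shape[0], block):
        d2 = 2.0 - 2.0 * phi[start:start + block] @ psi.T
        z += pbar[start:start + block] @ np.exp(-d2) @ pbar
    return z
```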
4.2 Comparison to other POS induction models

Even though embedding models have been studied extensively, they are not widely used for POS tagging (see however [18]). For the unsupervised POS tagging task, HMMs have until recently dominated the field. Here we show that an embedding model substantially outperforms HMMs, and achieves the same level of performance as the best distributional-only model to date [15]. Models that use features, e.g. morphological, achieve higher tagging precision [11, 14]. Incorporating features into S-CODE can easily be done, either directly or in a two-step approach as in [14]; this is left for future work.

One of the widely-acknowledged challenges in applying HMMs to the unsupervised POS tagging problem is that these models do not afford a convenient vehicle for modeling an important sparseness property of natural languages, namely the fact that any given word type admits of only a small number of POS tags, often only one (see in particular [7, 2, 4]). In contrast, the approach presented here maps each word type to a single point in the embedding space. Hence, it assigns a single tag to each word type, like a number of other recent approaches [15, 16, 17]. These approaches are incapable of disambiguating, i.e., of assigning different tags to the same word depending on context, as in "I long to see a long movie." HMMs are, in principle, capable of doing so, but at the cost of over-parameterization. In view of the superior performance of S-CODE and of other type-level approaches, it appears that under-parameterization might be the better choice for this task.

Another difference between our model and HMMs previously applied to this problem is that our model is symmetric, thereby modeling right and left context distributions. In contrast, HMMs are asymmetric in that they typically model a left-to-right transition and would find a different solution if a right-to-left transition were modeled. We argue that using both distributions in a symmetric way better captures the important linguistic information. In the past, left and right distributions were extracted by factoring the bigram matrix and using the left and right eigenvectors. Such a linear method does not handle rare words well. Instead, we choose to learn the ratio $p(x,y)/\bigl(\bar{p}(x)\,\bar{p}(y)\bigr)$. This approach allows words with similar contexts but different unigram frequencies to be embedded near each other.

Like HMMs, CODE provides a model of the distribution of the data at hand. S-CODE departs slightly from this framework. Since it does not use the exact partition function in the stochastic gradient ascent procedure (and was actually found to perform best when replacing Z, in the update rule, by a constant that is substantially larger than the true value of Z), it only approximately converges to a local maximum of a likelihood function. In future work, and as a more radical deviation from the CODE model, one may then give up altogether modeling the distribution of X and Y, instead relying on a heuristically motivated objective function of sphere-constrained embeddings $\phi$ and $\psi$, to be maximized. Preliminary studies using a number of alternative functional forms for the regularization term yielded promising results.

Although S-CODE and LDC [15] achieve essentially the same level of performance on taggings that induce 17, 45, or 50 labels (Table 1), S-CODE proves superior for the induction of very fine-grained taggings. Thus, we compared the performances of S-CODE and LDC on the task of inducing 300 labels.
Under the MTO criterion, LDC achieved 80.9% (PTB45) and 87.9% (PTB17). S-CODE significantly outperformed it, with 83.5% (PTB45) and 89.8% (PTB17).

The appeal of S-CODE lies not only in its strong performance on the unsupervised POS tagging problem, but also in its simplicity, its robustness, and its mathematical grounding. The mathematics underlying CODE, as developed in [3], are intuitive and relatively simple. Modeling the joint probability of word type co-occurrence through distances between Euclidean embeddings, without relying on discrete categories or states, is a novel and promising approach for POS tagging. The spherical constraint introduced here permits the approximation of the partition function by a constant, which is the key to the efficiency of the algorithm for large datasets. The stochastic-gradient procedure produces two competing forces with intuitive meaning, familiar from the literature on learning in generative models. While the accuracy and computational efficiency of S-CODE is matched by the recent LDC algorithm [15], S-CODE is more robust, showing very little change in performance over a wide range of implementation choices. We expect that this improved robustness will allow S-CODE to be easily and successfully applied to other large-scale tasks, both linguistic and non-linguistic.

References

[1] Alexander Clark. 2003. Combining distributional and morphological information for part of speech induction. In 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 59-66.
[2] Jianfeng Gao and Mark Johnson. 2008. A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 344-352.
[3] Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. 2007. Euclidean embedding of co-occurrence data. Journal of Machine Learning Research, 8:2265-2295.
[4] Sharon Goldwater and Tom Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744-751.
[5] João V. Graça, Kuzman Ganchev, Ben Taskar, and Fernando Pereira. 2009. Posterior vs. parameter sparsity in latent variable models. In Neural Information Processing Systems Conference (NIPS).
[6] Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 320-327.
[7] Mark Johnson. 2007. Why doesn't EM find good HMM POS-taggers? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296-305.
[8] M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
[9] Marina Meilă. 2003. Comparing clusterings by the variation of information. In Bernhard Schölkopf and Manfred K. Warmuth, editors, COLT 2003: The Sixteenth Annual Conference on Learning Theory, volume 2777 of Lecture Notes in Computer Science, pages 173-187. Springer.
[10] Sam T. Roweis and Lawrence K. Saul. 2000. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326.
[11] Taylor Berg-Kirkpatrick, Alexandre Bouchard-Côté, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features.
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 582-590.
[12] Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 354-362.
[13] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. 2000. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323.
[14] Christos Christodoulopoulos, Sharon Goldwater and Mark Steedman. 2010. Two decades of unsupervised POS induction: how far have we come? In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010), pages 575-584.
[15] Michael Lamar, Yariv Maron and Elie Bienenstock. 2010. Latent-descriptor clustering for unsupervised POS induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 799-809.
[16] Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2010. Simple type-level unsupervised POS tagging. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 853-861.
[17] Michael Lamar, Yariv Maron, Mark Johnson, and Elie Bienenstock. 2010. SVD and clustering for unsupervised POS tagging. In Proceedings of the ACL 2010 Conference Short Papers, pages 215-219.
[18] Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML 2008), pages 160-167.
A Lagrangian Approach to Fixed Points

Eric Mjolsness, Department of Computer Science, Yale University, P.O. Box 2158 Yale Station, New Haven, CT 06520-2158
Willard L. Miranker, IBM Watson Research Center, Yorktown Heights, NY 10598

Abstract

We present a new way to derive dissipative, optimizing dynamics from the Lagrangian formulation of mechanics. It can be used to obtain both standard and novel neural net dynamics for optimization problems. To demonstrate this we derive standard descent dynamics as well as nonstandard variants that introduce a computational attention mechanism.

1 INTRODUCTION

Neural nets are often designed to optimize some objective function E of the current state of the system via a dissipative dynamical system that has a circuit-like implementation. The fixed points of such a system are locally optimal in E. In physics the preferred formulation for many dynamical derivations and calculations is by means of an objective function which is an integral over time of a "Lagrangian" function, L. From Lagrangians one usually derives time-reversible, non-dissipative dynamics which cannot converge to a fixed point, but we present a new way to circumvent this limitation and derive optimizing neural net dynamics from a Lagrangian. We apply the method to derive a general attention mechanism for optimization-based neural nets, and we describe simulations for a graph-matching network.

2 LAGRANGIAN FORMULATION OF NEURAL DYNAMICS

Often one must design a network with nontrivial temporal behaviors such as running longer in exchange for less circuitry, or focusing attention on one part of a problem at a time. In this section we transform the original objective function (c.f. [Mjolsness and Garrett, 1989]) into a Lagrangian which determines the detailed dynamics by which the objective is optimized. In section 3.1 we will show how to add in an extra level of control dynamics.

2.1 THE LAGRANGIAN

Replacing an objective E with an associated Lagrangian, L, is an algebraic transformation:

$E[v] \;\rightarrow\; L[v, \dot{v} \,|\, q] = K[v, \dot{v} \,|\, q] + \frac{dE}{dt}.$   (1)

The "action" $S = \int_{-\infty}^{\infty} L\,dt$ is to be extremized in a novel way:

$\frac{\delta S}{\delta \dot{v}_i(t)} = 0.$   (2)

In (1), q is an optional set of control parameters (see section 3.1) and K is a cost-of-movement term independent of the problem and of E. For one standard class of neural networks,

$E[v] = -\frac{1}{2}\sum_{ij} T_{ij} v_i v_j - \sum_i h_i v_i + \sum_i \phi_i(v_i),$   (3)

so

$-\frac{\partial E}{\partial v_i} = \sum_j T_{ij} v_j + h_i - g^{-1}(v_i),$   (4)

where $g^{-1}(v) = \phi'(v)$. Also dE/dt is of course $\sum_i (\partial E/\partial v_i)\,\dot{v}_i$.
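For concreteness, the following is a minimal sketch of (3) and (4), under the illustrative assumptions of a symmetric T and g = tanh (so that g^{-1} = arctanh and phi is its antiderivative); none of these choices is prescribed by the text above:

```python
# A minimal sketch of the network objective (3) and the gradient (4),
# assuming g = tanh, so g^{-1}(v) = arctanh(v) and
# phi(v) = v*arctanh(v) + 0.5*log(1 - v^2), the antiderivative of arctanh.
import numpy as np

def neg_grad_E(v, T, h):
    """Returns -dE/dv_i = sum_j T_ij v_j + h_i - g^{-1}(v_i)."""
    return T @ v + h - np.arctanh(v)

def energy(v, T, h):
    phi = v * np.arctanh(v) + 0.5 * np.log(1.0 - v ** 2)
    return -0.5 * v @ T @ v - h @ v + phi.sum()
```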
t (b) (a) Figure 1: (a) Greedy functional derivatives result in greedy optimization: the "next" point in a trajectory is chosen on the basis of previous points but not future ones. (b) Two time variables t and T may increase during nonoverlapping interJ dT(h(T) and vals of an underlying physical time variable, T. For example t T dT<p2(T) where <Pl and <P2 are nonoverlapping clock signals. = =J 8L/8vi(t) = 0 ~ <p'(vdr)/r + 8E/8vi = 0, l.e. Vi = rg( - r 8E/8vi ). As usual 9 = (<p') -1. A transfer function with -1 velocity constraint -r < Vi < r . 2.4 (8) (9) < g( x) < 1 could enforce a HOPFIELD/GROSSBERG DYNAMICS With a suitable J( one may recover the analog neuron dynamics of Hopfield (and Grossberg): L ~ 1 ' 2,( ) ~ 8E . =L.J -2 Ui 9 Ui + L.J -8. Vi, . I ? I VI Vi _ =9 ( ) Ui ? = 0 ~ Ui + 8E/8vi = 0, i.e. Ui = -8E/8vi and Vi = g(Ui) . 8L/8ui(t) (10) (11) (12) We conjecture that this function J( [Ui, ud is optimal in a certain sense: if we linearize the u dynamics and consider the largest and smallest eigenvalues, extremized separately over the entire domain of u, with -T constrained to have bounded positive eigenvalues, then the ratio of such largest and smallest eigenvalues is minimal for this J(. This criterion is of practical importance because the largest eigenvalue should be bounded for circuit implement ability, and the smallest eigenvalue should be bounded away from zero for circuit convergence in finite time. 79 80 Mj olsness and Miranker 2.5 A CHANGE OF VARIABLES SIMPLIFIES L We note a change of variable which simplifies the kinetic energy term in the above dynamics, for use in the next section: L[w] = Li ~wl + Li :~l Wi, 8L/8wi(t) == 0 ~ Wi Wi = -8E/ 8wi which is supposed to be identical to be arranged by choosing w: Ui + 8E/8wi = 0, I.e. = -8E/8vi, = g(Ui) (c.f. Vi (13) (12)). This can (14) i.e. Wi 3 = JUI du.jg'(u) and Vi = JWi dw.jg'(u(w)). (15) APPLICATION TO COMPUTATIONAL ATTENTION We can introduce a computational "attention mechanism" for neural nets as follows. Suppose we can only afford to simulate A out of N ~ A neurons at a time in a large net. We shall do this by simulating A real neurons indexed by a E {I ... A}, corresponding to a dynamically chosen subset of the N virtual neurons indexed by i E {l. .. N}. 3.0.1 Constraints In great generality, the correspondance can be chosen dynamically via a sparse matrix of control parameters qia = ria E [0,1] L:i ria 1, La ria < 1. = constrained so that (16) Alternatively, the r variables can be coordinated to describe a "window" or "focus" of attention by taking ria to be a function of a small number of parameters q specifying the window, which are adjusted to optimize E[r[q]]. This procedure, which can result in significant economies, was used for our computer experiments. 3.0.2 Neuron Dynamics The assumed control relationship is Wi = Lriaka, (17) a i.e. virtual neuron Wi follows the real neuron to which r assigns it. Equation (15) then determines Ui(t) and viet). A plausible kinetic energy term for k is the same A Lagrangian Approach to Fixed Points as for w (c.f. equation (13?, since that choice (equivalent to the Hoplield case) has a good eigenvalue ratio for the u variables. The Lagrangian for the real neurons becomes . 1 ~?2 ~ 8E . (18) L[k] = - L.Jka + L.J -8. riaka 2 a . WI la and the equations of motion (greedy variation) may be shown to be ka = L riavg'(U(w,? , 3.1 [I: 71jvj + h, - (19) u,]. j CONTROL DYNAMICS FOR ATTENTION Now we need dynamics for the control parameters r or more generally q. 
An objective function transformation (proposed and subjected to preliminary experiments in [Mjolsness, 1987]) can be used to construct a new objective for the control parameters, q, which rewards speedy convergence of the original objective E as a function of the original variables v by measuring dE/dt: E[v] -+ E[q] = b(dE/dt) + Ecos t [q] b[2:i(8E/8v,)tid + Ecost [q], (20) where b is a monotonic, odd function that can be used to limit the range of E. We can calculate dE/dt from equations (17) and (19): Eb~eftt(r) =6(:~) = 6 [f.><. :! k.] = -6 [~ (~>'.Vg,(U') ;~ y] , (21) where 8E/8vi = 2:j 71jvj + hi - Ui. If we assume that Ecos t favors fixed points for which ria ~ 0 or 1 and 2:i ria ~ 0 or 1, there is a fixed-point-preserving transformation of (21) to Eb~eftt(r) = -6 [~r,.9'( U;)(;:')2] . This is monotonic in a linear function of r. It remains to specify energy term [(. 3.2 (22) Ecos t and a kinetic INDEPENDENT VIRTUAL NEURONS First consider independent ria. As in the Tank-Hopfield [Tank and Hopfield, 1986] linear programming net, we could take Thus the r dynamics just sorts the virtual neurons and chooses the A neurons with largest g' (ui)8 E / 8v, . For dynamics, we introduce a new time variable T that 81 82 Mjolsness and Miranker may not even be proportional to t (see figure 1b) and imitate the Lagrangians for Hopfield dynamics: L " 1 (d Pi a ) = '~ 2 dr 2 , 9 (Pi) d (_ + dr Ebeneflt ~) + ECOBt (24) ; sa 3.3 JUMPING WINDOW OF ATTENTION A far more cost-effective net involves partitioning the virtual neurons into real-netsized blocks indexed by a, so i -+ (a, a) where a indexes neurons within a block. Let XQ E [0,1] indicate which block is the current window or focus of attention, i.e. (26) Using (22), this implies Ebeneflt[x] = -b [Z:XQ Q Z:g'(UQa)(8~E )2] , a (27) Qa and (28) Since ECOBt here favors LQ XQ = 1 and XQ E {O, I}, points as, and can be replaced by, Ebeneflt has the same fixed (29) Then the dynamics for X is just that of a winner-take-all neural net among the blocks which will select the largest value of b[La g'(uQa )(8E/8vQa)2]. The simulations of Section 4 report on an earlier version of this control scheme, which selected instead the block with the largest value of La 18E/8vQa l. 3.4 ROLLING WINDOW OF ATTENTION Here the r variables for a neural net embedded in a d-dimensional space are determined by a vector x representing the geometric position of the window. ECOBt can be dropped entirely, and E can be calculated from r(x). Suppose the embedding is via a d-dimensional grid which for notational purposes is partitioned into window-sized squares indexed by integer-valued vectors 0: and a. Then (30) where 8w(x) ----'--'- = 8x~ + L)2] - L)2 - 1/4] 0 {6[1/4 - (xp 6[(x~ if if otherwise -1/2 -1/2 $xp+L< $ x~ - L < 1/2 1/2 (31) A Lagrangian Approach to Fixed Points and E[x] = -b [z: w(Lo: + a - o:a X)g'(uo:a)(8~Eo:a )2] . (32) The advantage of (30) over, for example, a jumping or sliding window of attention is that only a small number of real neurons are being reassigned to new virtual neurons at anyone time. 3.4.1 Dynamics of a Rolling Window A candidate Lagrangian is L[x] = ! '" (dXp.) 2 + '" 8E dxp. , 2 L...J P. dT (33) L...J 8x P. dT p. whence greedy variation hS/hz = 0 yields dX JJ dT =_ [2: o:a 8w(x - Lo: - a) g'(Uo:a)( 8E OX JJ OVo:a )2] Xb' [2: o:a wg'(Uo:a)( 8E 8 v o:a )2] (34) We may also calculate that the linearized dynamic's eigenvalues can be bounded away from infinity and zero. 
4 SIMULATIONS A jumping window of attention was simulated for a graph-matching network in which the matching neurons were partitioned into groups, only one of which was active (ria = 1) at any given time. The resulting optimization method produced solutions of similar quality as the original neural network, but had a smaller requirement for computational space resources at any given time. Acknowledgement: Charles Garrett performed the computer simulations. References [Mjolsness, 1987] Mjolsness, E. (1987) . Control of attention in neural networks. In Proc. of First International Conference on Neural Networks, volume vol. II, pages 567-574. IEEE. [Mjolsness and Garrett, 1989] Mjolsness, E. and Garrett, C. (1989). Algebraic transformations of objective functions. Technical Report YALEU/DCS/RR686, Yale University Computer Science Department. Also, in press for Neural Networks. [Tank and Hopfield, 1986] Tank, D. W. and Hopfield, J. J. (1986). Simple 'neural' optimization networks: An aid converter, signal decision circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems, CAS-33 . 83
398 |@word h:1 effect:1 version:1 involves:1 indicate:1 implies:1 objective:9 simulation:4 linearized:1 usual:1 during:1 virtual:6 exchange:1 yorktown:1 oc:1 yaleu:1 criterion:1 simulated:1 lagrangians:2 preliminary:1 demonstrate:1 adjusted:1 motion:1 pl:1 current:2 ka:1 index:1 relationship:1 ratio:2 novel:2 dx:1 must:1 great:1 charles:1 subsequent:1 circuitry:1 vlr:1 functional:4 physical:1 smallest:3 winner:1 designed:1 purpose:1 volume:1 analog:1 proc:1 implementation:1 greedy:6 selected:1 design:1 imitate:1 significant:1 extremal:1 ria:8 beginning:1 largest:6 steepest:2 wl:1 jwi:1 neuron:16 grid:1 finite:1 descent:3 optional:1 jg:2 joo:1 had:1 rather:1 dc:1 station:1 height:1 longer:1 vals:1 add:1 focus:2 optimizing:2 notational:1 optimized:2 introduce:3 certain:1 watson:1 behavior:1 sense:1 whence:1 mechanic:1 economy:1 qa:1 conserved:1 preserving:1 dynamical:2 usually:2 entire:1 eo:1 window:10 considering:1 qia:1 becomes:1 converge:1 focussing:1 underlying:1 bounded:4 circuit:6 tank:4 among:1 ud:1 signal:2 ii:1 sliding:1 technical:1 suitable:1 prohibits:1 constrained:2 calculation:1 circumvent:1 representing:1 construct:1 transformation:4 scheme:1 temporal:1 identical:1 variant:1 future:1 report:2 control:9 partitioning:1 uo:3 haven:1 xq:4 geometric:1 acknowledgement:1 preserve:1 positive:1 tid:1 dropped:1 separately:1 embedded:1 limit:1 replaced:1 willard:1 limitation:1 extra:1 proportional:1 vg:1 hz:1 eb:2 dissipative:3 dynamically:2 specifying:1 xp:2 integer:1 ovo:1 pi:2 range:1 ibm:1 lo:2 xb:1 grossberg:2 practical:1 course:1 integral:1 converter:1 block:5 implement:1 simplifies:2 jumping:3 viet:1 procedure:1 indexed:4 taking:1 sparse:1 calculated:1 matching:3 minimal:1 earlier:1 jui:1 algebraic:2 cannot:1 measuring:1 afford:1 jj:2 action:1 far:1 cost:1 generally:1 transaction:1 subset:1 optimize:2 equivalent:1 rolling:2 lagrangian:14 center:1 detailed:1 vqa:2 preferred:1 attention:13 locally:1 active:1 assumed:1 assigns:1 alternatively:1 chooses:1 international:1 dw:1 mj:1 embedding:1 physic:2 transfer:1 variation:2 shall:1 vol:1 reassigned:1 ca:1 group:1 suppose:2 du:1 programming:2 domain:1 velocity:1 dr:2 main:1 derivative:4 graph:2 li:2 de:4 nonoverlapping:2 calculate:2 coordinated:1 ny:1 aid:1 mjolsness:9 vi:31 movement:1 performed:1 decision:1 position:1 lq:1 candidate:1 portion:1 ui:13 recover:2 reward:1 sort:1 entirely:1 ct:1 hi:2 dynamic:24 yale:3 jvj:2 correspondance:1 square:1 nontrivial:1 constraint:2 infinity:1 extremized:2 eric:1 yield:1 basis:1 ecos:3 derives:1 generalize:1 hopfield:7 simulate:1 anyone:1 importance:1 produced:1 derivation:1 trajectory:3 total:1 conjecture:1 describe:2 effective:1 department:2 nonstandard:1 rg:1 choosing:2 smaller:1 plausible:1 valued:1 energy:4 wi:10 otherwise:1 wg:1 partitioned:2 ability:1 favor:2 associated:1 monotonic:2 transform:1 dxp:2 ldt:1 determines:2 advantage:1 eigenvalue:7 equation:6 net:10 resource:1 remains:1 mechanism:3 kinetic:3 sized:1 garrett:4 subjected:1 unusual:1 change:2 dt:9 determined:1 specify:1 apply:1 miranker:4 supposed:1 formulation:3 arranged:1 box:1 ox:1 generality:1 just:2 away:2 enforce:1 simulating:1 clock:1 convergence:3 la:4 requirement:1 select:1 original:4 replacing:1 ei:2 running:1 speedy:1 derive:4 linearize:1 quality:1 ij:1 odd:1 sa:1 p2:2
Short-term memory in neuronal networks through dynamical compressed sensing

Surya Ganguli, Sloan-Swartz Center for Theoretical Neurobiology, UCSF, San Francisco, CA 94143, [email protected]

Haim Sompolinsky, Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel and Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA, [email protected]

Abstract

Recent proposals suggest that large, generic neuronal networks could store memory traces of past input sequences in their instantaneous state. Such a proposal raises important theoretical questions about the duration of these memory traces and their dependence on network size, connectivity and signal statistics. Prior work, in the case of gaussian input sequences and linear neuronal networks, shows that the duration of memory traces in a network cannot exceed the number of neurons (in units of the neuronal time constant), and that no network can out-perform an equivalent feedforward network. However a more ethologically relevant scenario is that of sparse input sequences. In this scenario, we show how linear neural networks can essentially perform compressed sensing (CS) of past inputs, thereby attaining a memory capacity that exceeds the number of neurons. This enhanced capacity is achieved by a class of "orthogonal" recurrent networks and not by feedforward networks or generic recurrent networks. We exploit techniques from the statistical physics of disordered systems to analytically compute the decay of memory traces in such networks as a function of network size, signal sparsity and integration time. Alternately, viewed purely from the perspective of CS, this work introduces a new ensemble of measurement matrices derived from dynamical systems, and provides a theoretical analysis of their asymptotic performance.

1 Introduction

How neuronal networks can store a memory trace for recent sequences of stimuli is a central question in theoretical neuroscience. The influential idea of attractor dynamics [1] suggests how single stimuli can be stored as stable patterns of activity, or fixed point attractors, in the dynamics of recurrent networks. But such simple fixed points are incapable of storing sequences. More recent proposals [2, 3, 4] suggest that recurrent networks could store temporal sequences of inputs in their ongoing, transient activity, even if they do not have nontrivial fixed points. In principle, past inputs could be read out from the instantaneous activity of the network. However, the theoretical principles underlying the ability of recurrent networks to store temporal sequences in their transient dynamics are poorly understood. For example, how long can memory traces last in such networks, and how does memory capacity depend on parameters like network size, connectivity, or input statistics?

Several recent theoretical studies have made progress on these issues in the case of linear neuronal networks and gaussian input statistics. Even in this simple setting, the relationship between the memory properties of a neural network and its connectivity is nonlinear, and so understanding this relationship poses an interesting challenge. Jaeger [4] proved a rigorous sum-rule (reviewed in more detail below) which showed that even in the absence of noise, no recurrent network can remember inputs for an amount of time that exceeds the number of neurons (in units of the neuronal time constant) in the network. White et al.
[5] showed that in the presence of noise, a special class of "orthogonal" networks, but not generic recurrent networks, could have memory that scales with network size. And finally, Ganguli et al. [6] used the theory of Fisher information to show that the memory of a recurrent network cannot exceed that of an equivalent feedforward network, at least for times up to the network size, in units of the neuronal time constant. A key reason theoretical progress was possible in these works was that even though the optimal estimate of past inputs was a nonlinear function of the network connectivity, it was still a linear function of the current network state, due to the gaussianity of the signal (and possible noise) and the linearity of the dynamics. It is not clear, for example, how these results would generalize to nongaussian signals, whose reconstruction from the current network state would require nonlinear operations.

Here we report theoretical progress on understanding the memory capacity of linear recurrent networks for an important class of nongaussian signals, namely sparse signals. Indeed a wide variety of temporal signals of interest are sparse in some basis, for example human speech in a wavelet basis. We use ideas from compressed sensing (CS) to define memory curves which capture the decay of memory traces in neural networks for sparse signals, and provide methods to compute these curves analytically. We find strikingly different properties of memory curves in the sparse setting compared to the gaussian setting. Although motivated by the problem of memory, we also contribute new results to the field of CS itself, by introducing and analyzing new classes of CS measurement matrices derived from dynamical systems. Our main results are summarized in the discussion section. In the next section, we begin by reviewing more quantitatively the problem of short-term memory in neuronal networks, compressed sensing, and the relation between the two.

2 Short-term memory as dynamical compressed sensing

Consider a discrete time network dynamics given by

$x(n) = W x(n-1) + v\,s_0(n)$.   (1)

Here a scalar, time dependent signal $s_0(n)$ drives a recurrent network of N neurons. $x(n) \in \mathbb{R}^N$ is the network state at time n, W is an $N \times N$ recurrent connectivity matrix, and v is a vector of feedforward connections from the signal into the network. We choose v to have norm 1, and we demand that the dynamics be stable, so that if $\lambda$ is the squared magnitude of the largest eigenvalue of W, then $\lambda < 1$. If we think of the signal history $\{s_0(n-k)\,|\,k \ge 0\}$ as an infinite dimensional temporal vector $s^0$ whose k'th component $s^0_k$ is $s_0(n-k)$, then the current network state x is linearly related to $s^0$ through the effective $N \times \infty$ measurement matrix A, i.e. $x = A s^0$, where the matrix elements

$A_{\mu k} = (W^k v)_\mu, \quad \mu = 1, \ldots, N, \quad k = 0, \ldots, \infty$   (2)

reflect the effect of an input k timesteps in the past on the activity of neuron $\mu$. The extent to which the dynamical system in (1) can remember the past can then be quantified by how well one can recover $s^0$ from x [4, 5, 6]. In the case where the signal has zero mean gaussian statistics with covariance $\langle s^0_k s^0_l \rangle = \delta_{k,l}$, the optimal, minimum mean squared error estimate $\hat{s}$ of the signal history is given by $\hat{s} = A^T (A A^T)^{-1} x$. The correlation between the estimate $\hat{s}_k$ and the true signal $s^0_k$, averaged over the gaussian statistics of $s^0$, then defines a memory curve $M(k) = \langle \hat{s}_k s^0_k \rangle_{s^0}$, whose decay as k increases quantifies the decay of memory for past inputs in (1).
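To make these definitions concrete, the following minimal sketch (numpy assumed; the truncation length T and all names are our own choices) builds a truncated version of the measurement matrix A of equation (2) and evaluates the gaussian-case memory curve: with $x = As^0$ and $\hat{s} = A^T(AA^T)^{-1}x$, one has $M(k) = [A^T(AA^T)^{-1}A]_{kk}$ for unit-variance white signals.

```python
import numpy as np

def dynamical_cs_matrix(W, v, T):
    """Truncated N x T measurement matrix with columns W^k v, as in eq. (2).
    Truncating at T columns is our numerical choice; the true A is N x infinity."""
    N = W.shape[0]
    A = np.empty((N, T))
    col = v.astype(float).copy()
    for k in range(T):
        A[:, k] = col
        col = W @ col
    return A

def gaussian_memory_curve(A):
    """M(k) for unit-variance gaussian inputs: diagonal of A^T (A A^T)^{-1} A."""
    return np.diag(A.T @ np.linalg.solve(A @ A.T, A))
```

Summing the resulting M(k) over k gives the trace of a projector of rank at most N, which is one way to check the sum-rule discussed next numerically.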
Jaeger proved an important sum-rule for M(k): $\sum_{k=0}^{\infty} M(k) = N$ for any recurrent connectivity W and feedforward connectivity v. Given that M(k) cannot exceed 1 for any k, an important consequence of this sum-rule is that it is not possible to recover an input signal k timesteps into the past when k is much larger than N, in the sense that $\hat{s}_k$ will be at most weakly correlated with $s^0_k$. Generically, one may not hope to remember sequences lasting longer than N timesteps with only N neurons, but in the case of temporally sparse inputs, the field of compressed sensing (CS) suggests this may be possible. CS [7, 8] shows how to recover a sparse T dimensional signal $s^0$, in which only a fraction f of the elements are nonzero, from a set of N linear measurements $x = A s^0$ where A is an N by T measurement matrix with N < T. One approach to recovering an estimate $\hat{s}$ of $s^0$ from x involves L1 minimization,

$\hat{s} = \arg\min_s \sum_{i=1}^{T} |s_i| \quad \text{subject to} \quad x = A s$,   (3)

which finds the sparsest signal, as measured by smallest L1 norm, consistent with the measurement constraints. Much of the seminal work in CS [9, 10, 11] has focused on sufficient conditions on A such that (3) is guaranteed to perfectly recover the true signal, so that $\hat{s} = s^0$. However, many large random measurement matrices A which violate sufficient conditions proven in the literature still nevertheless typically yield perfect signal recovery. Alternate work [12, 13, 14, 15], which analyzes the asymptotic performance of large random measurement matrices in which each matrix element is drawn i.i.d. from a gaussian distribution, has revealed a phase transition in performance as a function of the signal sparsity f and the degree of subsampling $\alpha = N/T$. In the $\alpha$-f plane, there is a critical phase boundary $\alpha_c(f)$ such that if $\alpha > \alpha_c(f)$ then CS will typically yield perfect signal reconstruction, whereas if $\alpha < \alpha_c(f)$, CS will yield errors.

Motivated by the above work in CS, we propose here that a neural network, or more generally any dynamical system as in (1), could in principle perform compressed sensing of its past inputs, and that a long but sparse signal history $s^0$ could potentially be recovered from the instantaneous network state x. We quantify the memory capabilities of a neural network for sparse signals by assessing our ability to reconstruct the past signal using L1 minimization. Given a network state x arising from a signal history $s^0$ through (1), we can obtain an estimate $\hat{s}$ of the past using (3), where the measurement matrix A is given by (2). We then define a memory curve

$E(k) = \langle (\hat{s}_k - s^0_k)^2 \rangle_{s^0}$,   (4)

namely the average reconstruction error of a signal k timesteps in the past, averaged over the statistics of $s^0$. The rise of this error as k increases captures the decay of memory traces in (1). The central goal of this paper is to obtain a deeper understanding of the memory properties of neural networks for sparse signals by studying the memory curve E(k) and especially its dependence on W. In particular, we are interested in classes of network connectivities W and input statistics for which E(k) can remain small even for $k \gg N$. Such networks can essentially perform compressed sensing of their past inputs.

From the perspective of CS, measurement matrices A of the form in (2), henceforth referred to as dynamical CS matrices, possess several new features not considered in the existing CS literature, features which could pose severe challenges for a recurrent network W to achieve good CS performance. First, A is an N by $\infty$
matrix, and so from the perspective of the phase diagram for CS reviewed above, it is likely that A is in the error phase; thus perfect reconstruction of the true signal, even for recent inputs, will not be possible. Second, because we demand stable dynamics in (1), the columns of A decay as k increases: $\|W^k v\|^2 < \lambda^k$, where again $\lambda < 1$ is the squared magnitude of the largest eigenvalue of W. Such decay can compound errors. Third, the different columns of A can be correlated; if one thinks of $W^k v$ as the state of the network k timesteps after a single unit input pulse, it is clear that temporal correlations in the evolving network response to this pulse are equivalent to correlations in the columns of A in (2). Such correlations could potentially adversely affect the performance of CS based on A, as well as complicate the theoretical analysis of CS performance. Nevertheless, despite all these seeming difficulties, in the following we show that a special class of network connectivities can indeed achieve good CS performance in which errors are controlled and memory traces can last longer than the number of neurons.

3 Memory in an Annealed Approximation to a Dynamical System

In this section, we work towards an analytic understanding of the memory curve E(k) defined in (4). This curve depends on W, v and the statistics of $s^0$. We would like to understand its properties for ensembles of large random networks W, just as the asymptotic performance of CS was analyzed for large random measurement matrices A [12, 13, 14, 15]. However, in the dynamical setting, even if W is drawn from a simple random matrix ensemble, A in (2) will have correlations across its columns, making an analytical treatment of the memory curve difficult. Here we consider an ensemble of measurement matrices A which approximate dynamical CS matrices and can be treated analytically. We consider matrices in which each element $A_{\mu k}$ is drawn i.i.d. from a zero mean gaussian distribution with variance $\lambda^k$. Since we are interested in memory that lasts O(N) timesteps, we choose $\lambda = e^{-1/(\tau N)}$, with $\tau$ of O(1). This so-called annealed approximation (AA) to a dynamical CS matrix captures two of the salient properties of dynamical CS matrices, their infinite temporal extent and the decay of successive columns, but neglects the analytically intractable correlations across columns. Such annealed CS matrices can be thought of as arising from "imaginary" dynamical systems in which network activity patterns over time in response to a pulse decay, but are somehow temporally uncorrelated. $\tau$ can be thought of as the effective integration time of this dynamical system, in units of the number of neurons. Finally, to fully specify E(k), we must choose the statistics of $s^0$. We assume $s^0$ has a probability f of being nonzero at any given time, and if nonzero, this nonzero value is drawn from a distribution P(s) which for now we take to be arbitrary.
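Before turning to the theory, the memory curve E(k) of equation (4) can be estimated empirically for this ensemble. A minimal sketch (scipy assumed; names and numerical settings ours; for concreteness it uses the plus-minus signals introduced later in this section) draws an annealed matrix, generates a sparse signal, and solves the L1 problem (3) as the standard linear program obtained from the split $s = s^+ - s^-$:

```python
import numpy as np
from scipy.optimize import linprog

def annealed_matrix(N, T, tau, rng):
    """Annealed ensemble: A_{mu k} i.i.d. N(0, lambda^k), lambda = exp(-1/(tau N))."""
    lam = np.exp(-1.0 / (tau * N))
    return rng.standard_normal((N, T)) * np.sqrt(lam ** np.arange(T))

def l1_reconstruct(A, x):
    """Eq. (3): min ||s||_1 s.t. A s = x, via s = s+ - s- with s+, s- >= 0."""
    T = A.shape[1]
    res = linprog(np.ones(2 * T), A_eq=np.hstack([A, -A]), b_eq=x,
                  bounds=(0, None), method="highs")
    return res.x[:T] - res.x[T:]

def empirical_memory_curve(N, T, tau, f, trials, seed=0):
    """Average (s_hat_k - s0_k)^2 over sparse plus-minus signals."""
    rng = np.random.default_rng(seed)
    E = np.zeros(T)
    for _ in range(trials):
        A = annealed_matrix(N, T, tau, rng)
        s0 = rng.choice([0.0, 1.0, -1.0], size=T, p=[1 - f, f / 2, f / 2])
        E += (l1_reconstruct(A, A @ s0) - s0) ** 2
    return E / trials
```

Averages of this kind produce the blue simulation curves described below; the rest of this section derives the corresponding theoretical prediction.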
To theoretically compute the memory curve E(k), we define an energy function

$E(s) = \frac{\gamma}{2}\, u^T A^T A\, u + \sum_{i=1}^{T} |s_i|$,   (5)

where $u \equiv s - s^0$ is the residual, and we consider the Gibbs distribution $P_G(s) = \frac{1}{Z} e^{-\beta E(s)}$. We will later take $\gamma \to \infty$ so that the quadratic part of the energy function enforces the constraint $A s = A s^0$, and then take the low temperature limit $\beta \to \infty$ so that $P_G$ concentrates onto the global minimum of (3). In this limit, we can extract the memory curve E(k) as the average of $(s_k - s^0_k)^2$ over $P_G$ and the statistics of $s^0$.

Although $P_G$ depends on A, for large N the properties of $P_G$, including the memory curve E(k), do not depend on the detailed realization of A, but only on its statistics. Indeed we can compute all properties of $P_G$ for any typical realization of A by averaging over both A and $s^0$. This is done using the replica method [16] in our supplementary material. The replica method has been used recently in several works to analyze CS for the traditional case of uniform random gaussian measurement matrices [14, 17, 15]. We find that the statistics of each component $s_k$ in $P_G(s)$, conditioned on the true value $s^0_k$, are well described by a mean field effective Hamiltonian

$H_k^{MF}(s) = \frac{\beta \gamma \lambda^k}{2(1 + \beta \gamma \Delta Q)} \left(s - s^0_k - z\sqrt{Q_0/\lambda^k}\right)^2 + \beta |s|$,   (6)

where z is a random variable with a standard normal distribution. Thus the mean field approximation to the marginal distribution of a reconstruction component $s_k$ is

$P_k^{MF}(s_k = s) = \int Dz\, \frac{1}{Z_k^{MF}} \exp(-H_k^{MF}(s))$,   (7)

where $Dz = \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}$ is a Gaussian measure. The order parameters $Q_0$ and $\Delta Q \equiv Q_1 - Q_0$ obey

$Q_0 = \frac{1}{N} \sum_{k=0}^{\infty} \lambda^k \left\langle\!\left\langle \langle u \rangle^2_{H_k^{MF}} \right\rangle\!\right\rangle_z$   (8)

$\Delta Q = \frac{1}{N} \sum_{k=0}^{\infty} \lambda^k \left\langle\!\left\langle \langle \delta u^2 \rangle_{H_k^{MF}} \right\rangle\!\right\rangle_z$.   (9)

Here $\langle u \rangle_{H_k^{MF}}$ and $\langle \delta u^2 \rangle_{H_k^{MF}}$ are the mean and variance of the residual $u_k = s_k - s^0_k$ with respect to a Gibbs distribution with Hamiltonian given by (6), and the double angular average $\langle\langle \cdot \rangle\rangle_z$ refers to integrating over the Gaussian distribution of z. $Q_1$ and $Q_0$ have simple interpretations in terms of the original Gibbs distribution $P_G$ defined above: $Q_1 = \frac{1}{N}\sum_k \lambda^k \langle u_k^2 \rangle_{P_G}$ and $Q_0 = \frac{1}{N}\sum_k \lambda^k \langle u_k \rangle^2_{P_G}$, for typical realizations of A. Thus the order parameter equations (8)-(9) can be understood as self-consistency conditions for the definition of $Q_0$ and $\Delta Q$ in the mean field approximation to $P_G$. In this approximation, the complicated constraints coupling $s_k$ for various k are replaced with a random gaussian force z in (6) which tends to prevent the marginal $s_k$ from assuming the true value $s^0_k$. This force is what remains of the measurement constraints after averaging over A, and its statistics are in turn a function of $Q_0$ and $Q_1$, as determined by the replica method.

Now to compute the memory curve E(k), we must take the limits $\gamma, \beta, N \to \infty$ and complete the average over $s^0_k$. The $\gamma \to \infty$ limit can be taken immediately in (6), and $\gamma$ disappears from the problem. Now as $\beta \to \infty$, self consistent solutions to (8) and (9) can be found when $Q_0 \to q_0$ and $\Delta Q \to \Delta q/\beta$, where $q_0$ and $\Delta q$ are O(1). This limit is similar to that taken in a replica analysis of CS for random gaussian matrices in the error regime [15]. Taking this limit, (6) becomes

$H_k^{MF}(s) = \beta\left[\frac{1}{2\lambda^{-k}\Delta q}\left(s - s^0_k - z\sqrt{\lambda^{-k} q_0}\right)^2 + |s|\right]$.   (10)

Since the entire Hamiltonian is proportional to $\beta$, in the large $\beta$ limit the statistics of $s_k$ are dominated by the global minimum of (10). In particular, we have

$\langle s \rangle_{H_k^{MF}} = \eta\left(s^0_k + z\sqrt{\lambda^{-k} q_0},\; \lambda^{-k}\Delta q\right)$,   (11)

where

$\eta(x, \sigma) = \arg\min_s \left[\frac{1}{2\sigma}(s - x)^2 + |s|\right] = \mathrm{sgn}(x)\,(|x| - \sigma)_+$   (12)

is a soft thresholding function which also arises in message passing approaches [18] to solving the CS problem in (3), and $(y)_+ = y$ if $y > 0$ and is otherwise 0. The optimization in (12) can be understood intuitively as follows: suppose one measures a scalar value x which is a true signal $s^0$ corrupted by additive gaussian noise with variance $\sigma$. Under a Laplace prior $e^{-|s^0|}$ on the true signal, $\eta(x, \sigma)$ is simply the MAP estimate of $s^0$ given the data x, which basically chooses the estimate s = 0 unless the data exceeds the noise level $\sigma$.
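The soft thresholding function, and the derivative that appears as equation (14) below, are simple to state in code; a minimal sketch (numpy assumed, names ours), used by the solver sketch later in this section:

```python
import numpy as np

def eta(x, sigma):
    """Soft threshold of eq. (12): sgn(x) * (|x| - sigma)_+ ."""
    return np.sign(x) * np.maximum(np.abs(x) - sigma, 0.0)

def eta_prime(x, sigma):
    """Derivative of eta w.r.t. x, eq. (14): the step function Theta(|x| - sigma)."""
    return (np.abs(x) > sigma).astype(float)
```

The same function is the proximal operator of the L1 norm, which is why it also appears in iterative and message-passing CS solvers.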
Thus we see that in (10), $\lambda^{-k}\Delta q$ plays the role of an effective noise level which increases with time k. Also, the variance of s at large $\beta$ is

$\langle (\delta s)^2 \rangle_{H_k^{MF}} = \frac{1}{\beta}\, \eta'\left(s^0_k + z\sqrt{\lambda^{-k} q_0},\; \lambda^{-k}\Delta q\right)$,   (13)

where

$\eta'(x, \sigma) = \frac{\partial \eta}{\partial x} = \Theta(|x| - \sigma)$,   (14)

and $\Theta(x)$ is a step function at 0. Inserting (11) and (13) and the ansatz $\Delta Q \to \Delta q/\beta$ into (8) and (9) then removes $\beta$ from the problem. But before making these substitutions, we first take $N \to \infty$ at fixed t = k/N and $\tau$ of O(1), so that $\lambda^k \to e^{-t/\tau}$, taking a continuum approximation for time: $\frac{1}{N}\sum_{k=0}^{\infty} \to \int_0^{\infty} dt$. Moreover, we average over the true signal history $s^0_k$, so that (8) and (9) become

$q_0 = \int_0^{\infty} dt\, e^{-t/\tau} \left\langle \left(\eta\!\left(s^0 + z\sqrt{e^{t/\tau} q_0},\, e^{t/\tau}\Delta q\right) - s^0\right)^2 \right\rangle_{z, s^0}$   (15)

$\Delta q = \int_0^{\infty} dt\, e^{-t/\tau} \left\langle \eta'\!\left(s^0 + z\sqrt{e^{t/\tau} q_0},\, e^{t/\tau}\Delta q\right) \right\rangle_{z, s^0}$,   (16)

where the double angular average reflects an integral over the gaussian distribution of z and the full distribution of $s^0$, i.e. $\langle F(z, s^0) \rangle_{z, s^0} \equiv (1-f)\int Dz\, F(z, 0) + f \int Dz \int ds^0\, P(s^0)\, F(z, s^0)$. Finally the memory curve E(t) is simply the continuum limit of the averaged squared residual $\langle\langle \langle u \rangle^2_{H_k^{MF}} \rangle\rangle_{z, s^0}$, and is given by

$E(t) = \left\langle \left(\eta\!\left(s^0 + z\sqrt{e^{t/\tau} q_0},\, e^{t/\tau}\Delta q\right) - s^0\right)^2 \right\rangle_{z, s^0}$.   (17)

Equations (15), (16), and (17) now depend only on $\tau$, f and $P(s^0)$, and their theoretical predictions can now be compared with numerical experiments. In this work we focus on a simple class of plus-minus (PM) signals in which $P(s^0) = \frac{1}{2}\delta(s^0 - 1) + \frac{1}{2}\delta(s^0 + 1)$. Fig. 1A shows an example of a PM signal $s^0$ with f = 0.01, while Fig. 1B shows an example of a reconstruction $\hat{s}$ using L1 minimization in (3), where the data x used in (3) was obtained from $s^0$ using a random annealed measurement matrix with $\tau$ = 1. Clearly there are errors in the reconstruction, but remarkably, despite the decay in the columns of A, the reconstruction is well correlated with the true signal for a time up to 4 times the number of measurements.

We can derive theoretical memory curves for any given f and $\tau$ by numerically solving for $q_0$ and $\Delta q$ in (15), (16), and inserting the results into (17). Examples of the agreement between theory and simulations are shown in Fig. 1C-E. As $t \to \infty$, L1 minimization always yields a zero signal estimate, so the memory curve asymptotically approaches f for large t. A convenient measure of memory capacity is the time $T_{1/2}$ at which the memory curve reaches half its asymptotic error value, i.e. $E(T_{1/2}) = f/2$.

[Figure 1: Memory in the annealed approximation. (A) A PM signal $s^0$ with f = 0.01 that lasts T = 10N timesteps where N = 500. (B) A reconstruction of $s^0$ from the output of an annealed measurement matrix with N = 500, $\tau$ = 1. (C,D,E) Example memory curves for f = 0.01, and $\tau$ = 1 (C), 2 (D), 3 (E). (F) $T_{1/2}$ as a function of $\tau$. The 4 curves from top to bottom are for f = 0.01, 0.02, 0.03, 0.04. (G) $T_{1/2}$ optimized over $\tau$ for each f. (H) The initial error as a function of f. The 3 curves from bottom to top are for $\tau$ = 1, 2, 3. For (C-H), red curves are theoretical predictions while blue curves and points are from numerical simulations of L1 minimization with N = 100, averaged over 300 trials. The width of the blue curves reflects standard error.]
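Theoretical curves like those in Fig. 1C-E can be obtained by iterating equations (15)-(16) directly. A sketch for PM signals (numpy assumed; the quadrature size, grid cutoff, damping and all names are our own numerical choices; eta and eta_prime are from the sketch above):

```python
import numpy as np

def annealed_memory_curve(f, tau, t_max=25.0, nt=500, nz=151,
                          iters=2000, damp=0.7):
    """Damped fixed-point iteration of eqs. (15)-(16) for PM signals;
    returns the t grid and E(t), eq. (17), at the (approximate) fixed point."""
    z, w = np.polynomial.hermite_e.hermegauss(nz)   # nodes/weights for exp(-z^2/2)
    w = w / np.sqrt(2.0 * np.pi)                    # now w @ F(z) approximates <F>_z
    t = np.linspace(0.0, t_max, nt)
    g = np.exp(t / tau)[:, None]                    # e^{t/tau}, one row per t
    kern = np.exp(-t / tau)
    q0, dq = 0.5 * f, 0.5 * f                       # arbitrary positive initial values
    for _ in range(iters):
        sig = g * dq                                # effective noise level e^{t/tau} dq
        amp = np.sqrt(g * q0)
        err, slope = np.zeros(nt), np.zeros(nt)
        for s0, prob in ((0.0, 1.0 - f), (1.0, f / 2.0), (-1.0, f / 2.0)):
            x = s0 + amp * z[None, :]
            err += prob * (((eta(x, sig) - s0) ** 2) @ w)
            slope += prob * (eta_prime(x, sig) @ w)
        q0 = damp * q0 + (1.0 - damp) * np.trapz(kern * err, t)
        dq = damp * dq + (1.0 - damp) * np.trapz(kern * slope, t)
    return t, err
```

From E(t) one can read off $T_{1/2}$ as the first time at which the error crosses f/2, reproducing curves like those in Fig. 1C-F.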
A principal feature of this family of memory curves is that for any given f there is an optimal $\tau$ which maximizes $T_{1/2}$ (Fig. 1F). The presence of this optimum arises due to a competition between decay and interference. If $\tau$ is too small, signal measurements decay too quickly, thereby preventing large memory capacity. However, if $\tau$ is too large, signals from the distant past do not decay away, thereby interfering with the measurements of more recent signals, and again degrading memory. As f decreases, long time signal interference is reduced, thereby allowing larger values of $\tau$ to be chosen without degrading memory for more recent signals. For any given f, we can compute $T_{1/2}(f)$ optimized over $\tau$ (Fig. 1G). This memory capacity, again measured in units of the number of neurons, already exceeds 1 at modest values of f = 0.1, and diverges as $f \to 0$, as does the optimal value of $\tau$. By analyzing (15) and (16) in the limit $f \to 0$ and $\tau \to \infty$, we find that $\Delta q$ is O(1) while $q_0 \to 0$. Furthermore, as $f \to 0$, the optimal $T_{1/2}$ is $O\!\left(\frac{1}{f \log(1/f)}\right)$. The smallest error occurs at t = 0, and it is natural to ask how this error E(0) behaves as a function of f for small f, to see how well the most recent input can be reconstructed in the limit of sparse signals. We analyze (15) and (16) in the limit $f \to 0$ and $\tau$ of O(1), and find that E(0) is $O(f^2)$, as confirmed in Fig. 1H. Furthermore, E(0) monotonically increases with $\tau$ for fixed f, as more signals from the past interfere with the most recent input.

4 Orthogonal Dynamical Systems

We have seen in the previous section that annealed CS matrices have remarkable memory properties, but our main interest was to exhibit a dynamical CS matrix as in (2) capable of good compressed sensing, and therefore short-term memory, performance. Here we show that a special class of network connectivity in which $W = \sqrt{\lambda}\, O$, where O is any orthogonal matrix, and v is any random unit norm vector, possesses memory properties remarkably close to those of the annealed matrix ensemble. Fig. 2A-F presents results identical to those of Fig. 1C-H, except for the crucial change that all simulation results in Fig. 2 were obtained using dynamical CS matrices of the form $A_{\mu k} = (\lambda^{k/2} O^k v)_\mu$, rather than annealed CS matrices. All red curves in Fig. 2A-F are identical to those in Fig. 1 and reflect the theory of annealed CS matrices derived in the previous section. For small $\tau$, we see small discrepancies between memory curves for orthogonal neural networks and the annealed theory (Fig. 2A-B), but as $\tau$ increases, this discrepancy decreases (Fig. 2C). In particular, from the perspective of the optimal $T_{1/2}$, for which larger $\tau$ is relevant, we see a remarkable match between the optimal memory capacity of orthogonal neural networks and that predicted by the annealed theory (see Fig. 2E). And there is a good match in the initial error even at small $\tau$ (Fig. 2F).

[Figure 2: Memory in orthogonal neuronal networks. Panels (A-F) are identical to panels (C-H) in Fig. 1, except now the blue curves and points are obtained from simulations of L1 minimization using measurement matrices derived from an orthogonal neuronal network. (G) The mean and standard deviation of $\lambda_f$ for 5 annealed (red) and 5 orthogonal matrices (blue) with N = 200 and T = 3000.]
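A sketch for generating such orthogonal dynamical CS matrices (numpy assumed; names ours). O is drawn Haar-uniformly using the standard QR construction with the sign fix on the diagonal of R:

```python
import numpy as np

def orthogonal_dynamical_matrix(N, T, tau, rng):
    """Columns lambda^{k/2} O^k v with O random orthogonal, i.e. W = sqrt(lambda) O."""
    lam = np.exp(-1.0 / (tau * N))
    Q, R = np.linalg.qr(rng.standard_normal((N, N)))
    O = Q * np.sign(np.diag(R))            # Haar-distributed orthogonal matrix
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    A = np.empty((N, T))
    col = v
    for k in range(T):
        A[:, k] = lam ** (k / 2.0) * col
        col = O @ col
    return A
```

Feeding such matrices to the l1_reconstruct sketch above reproduces the blue simulation curves of Fig. 2, and the column-correlation probe $\lambda_f$ described in the following paragraphs distinguishes this ensemble from the annealed one.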
The key difference between the annealed and the dynamical CS matrices is that the former neglects correlations across columns that can arise in the latter. How strong are these correlations for the case of orthogonal matrices? Motivated by the restricted isometry property [11], we consider the following probe of the strength of correlations across columns of A. Consider an N by fT matrix B obtained by randomly subsampling the columns of an N by T measurement matrix A. Let $\lambda_f$ be the maximal eigenvalue of the matrix $B^T B$ of inner products of columns of B. $\lambda_f$ is a measure of the strength of correlations across the fT sampled columns of A. We can estimate the mean and standard deviation of $\lambda_f$ due to the random choice of fT columns of A and plot the results as a function of f. To separate the issue of correlations from decay, we do this analysis for $\lambda = 1$ and finite T (similar results are obtained for large T and $\lambda < 1$). Results are shown in Fig. 2G for 5 instances of annealed (red) and dynamical (blue) CS matrices. We see strikingly different behavior in the two ensembles. Correlations are much stronger in the dynamical ensemble, and fluctuate from instance to instance, while they are weaker in the annealed ensemble, and do not fluctuate (the 5 red curves are on top of each other). Given the very different statistical properties of the two ensembles, the level of agreement between the simulated memory properties of orthogonal neural networks and the theory of annealed CS matrices is remarkable.

Why do orthogonal neural networks perform so well, and can more generic networks have similar performance? The key to understanding the memory, and CS, capabilities of orthogonal neural networks lies in the eigenvalue spectrum of an orthogonal matrix. The eigenvalues of $W = \sqrt{\lambda}\, O$, when O is a large random orthogonal matrix, are uniformly distributed on a circle of radius $\sqrt{\lambda}$ in the complex plane. Thus when $\lambda = e^{-1/(\tau N)}$, the sequence of vectors $W^k v$ explores the full N dimensional space of network activity patterns for $O(\tau N)$ time steps before decaying away. In contrast, a generic random Gaussian matrix W with elements drawn i.i.d. from a zero mean gaussian with variance $\lambda/N$ has eigenvalues uniformly distributed on a solid disk of radius $\sqrt{\lambda}$ in the complex plane. Thus the sequence of vectors $W^k v$ no longer explores a high dimensional space of activity patterns; components of v in the direction of eigenmodes of W with small eigenvalues will rapidly decay away, and so the sequence will rapidly become confined to a low dimensional space. Good compressed sensing matrices often have columns that are random and uncorrelated. From the above considerations, it is clear that dynamical CS matrices derived from orthogonal neural networks can come close to this ideal, while those derived from generic gaussian networks cannot.

5 Discussion

In this work we have made progress on the theory of short-term memory for nongaussian, sparse, temporal sequences stored in the transient dynamics of neuronal networks. We used the framework of compressed sensing, specifically L1 minimization, to reconstruct the history of the past input signal from the current network activity state. The reconstruction error as a function of time into the past then yields a well-defined memory curve that reflects the memory capabilities of the network. We studied the properties of this memory curve and its dependence on network connectivity, and found results that were qualitatively different from prior theoretical studies devoted to short-term memory in the setting of gaussian input statistics.
In particular we found that orthogonal neural networks, but importantly, not generic random gaussian networks, are capable of remembering inputs for a time that exceeds the number of neurons in the network, thereby circumventing a theorem proven in [4], which limits the memory capacity of any network to be less than the number of neurons in the gaussian signal setting. Also, recurrent connectivity plays an essential role in allowing a network to have a memory capacity that exceeds the number of neurons. Thus purely feedforward networks, which always outperform recurrent networks (for times less than the network size) in the scenario of gaussian signals and noise [6], are no longer optimal for sparse input statistics. Finally, we exploited powerful tools from statistical mechanics to analytically compute memory curves as a function of signal sparsity and network integration time. Our theoretically computed curves matched simulations of orthogonal neural networks reasonably well. To our knowledge, these results represent the first theoretical calculations of short-term memory curves for sparse signals in neuronal networks.

We emphasize that we are not suggesting that biological neural systems use L1 minimization to reconstruct past inputs. Instead we use L1 minimization in this work simply as a theoretical tool to probe the memory capabilities of neural networks. However, neural implementations of L1 minimization exist [19, 20], so if stimulus reconstruction were the goal of a neural system, reconstruction performance similar to what is reported here could be obtained in a neurally plausible manner. Also, we found that orthogonal neural networks, because of their eigenvalue spectrum, display remarkable memory properties, similar to those of an annealed approximation. Such special connectivity is essential for memory performance, as random gaussian networks cannot have memory similar to the annealed approximation. Orthogonal connectivity could be implemented in a biologically plausible manner using antisymmetric networks with inhibition operating in continuous time. When exponentiated, such connectivities yield the orthogonal networks considered here in discrete time.

Our results are relevant not only to the field of short-term memory, but also to the field of compressed sensing (CS). We have introduced two new ensembles of random CS measurement matrices. The first of these, dynamical CS matrices, are the effective measurements a dynamical system makes on a continuous temporal stream of input. Dynamical CS matrices have three properties not considered in the existing CS literature: they are infinite in temporal extent, have columns that decay over time, and exhibit correlations between columns. We also introduce annealed CS matrices, which are also infinite in extent and have decaying columns, but no correlations across columns. We show how to analytically calculate the time course of reconstruction error in the annealed ensemble and compare it to the dynamical ensemble for orthogonal dynamical systems. Our results show that orthogonal dynamical systems can perform CS even while operating with errors.

This work suggests several extensions. Given the importance of signal statistics in determining memory capacity, it would be interesting to study memory for sparse nonnegative signals.
The inequality constraints on the space of allowed signals arising from nonnegativity can have important effects in CS; they shift the phase boundary between perfect and error-prone reconstruction [12, 13, 15], and they allow the existence of a new phase in which signal reconstruction is possible even without L1 minimization [15]. We have found, through simulations, dramatic improvements in memory capacity in this case, and are extending the theory to explain these effects. Also, we have used a simple model for sparseness, in which a fraction of signal elements are nonzero. But our theory is general for any signal distribution, and could be used to analyze other models of sparsity, i.e. signals drawn from Lp priors. Also, we have worked in the high SNR limit. However, our theory can be extended to analyze memory in the presence of noise by working at finite $\gamma$. But most importantly, a deeper understanding of the relationship between dynamical CS matrices and their annealed counterparts would be desirable. The effects of temporal correlations in the network activity patterns of orthogonal dynamical systems are central to this problem. For example, we have seen that these temporal correlations introduce strong correlations between the columns of the corresponding dynamical CS matrix (Fig. 2G), yet the memory properties of these matrices agree well with our annealed theory (Fig. 2E-F), which neglects these correlations. We leave this observation as an intriguing puzzle for the fields of short-term memory, dynamical systems, and compressed sensing.

Acknowledgments

S. G. and H. S. thank the Swartz Foundation, Burroughs Wellcome Fund, and the Israeli Science Foundation for support, and Daniel Lee for useful discussions.

References

[1] J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. PNAS, 79(8):2554, 1982.
[2] W. Maass, T. Natschlager, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[3] H. Jaeger and H. Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78, 2004.
[4] H. Jaeger. Short term memory in echo state networks. GMD Report 152, German National Research Center for Information Technology, 2001.
[5] O.L. White, D.D. Lee, and H. Sompolinsky. Short-term memory in orthogonal neural networks. Phys. Rev. Lett., 92(14):148102, 2004.
[6] S. Ganguli, D. Huh, and H. Sompolinsky. Memory traces in dynamical systems. Proc. Natl. Acad. Sci., 105(48):18970, 2008.
[7] A.M. Bruckstein, D.L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34-81, 2009.
[8] E. Candes and M. Wakin. An introduction to compressive sampling. IEEE Sig. Proc. Mag., 25(2):21-30, 2008.
[9] D.L. Donoho and M. Elad. Optimally sparse representation in general (non-orthogonal) dictionaries via l1 minimization. PNAS, 100:2197-2202, 2003.
[10] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory, 52(2):489-509, 2006.
[11] E. Candes and T. Tao. Decoding by linear programming. IEEE Trans. Inf. Theory, 51:4203-4215, 2005.
[12] D.L. Donoho and J. Tanner. Sparse nonnegative solution of underdetermined linear equations by linear programming. PNAS, 102:9446-51, 2005.
[13] D.L. Donoho and J. Tanner.
Neighborliness of randomly projected simplices in high dimensions. PNAS, 102:9452-7, 2005.
[14] Y. Kabashima, T. Wadayama, and T. Tanaka. A typical reconstruction limit for compressed sensing based on lp-norm minimization. J. Stat. Mech., page L09003, 2009.
[15] S. Ganguli and H. Sompolinsky. Statistical mechanics of compressed sensing. Phys. Rev. Lett., 104(18):188701, 2010.
[16] M. Mezard, G. Parisi, and M.A. Virasoro. Spin Glass Theory and Beyond. World Scientific, Singapore, 1987.
[17] S. Rangan, A.K. Fletcher, and V.K. Goyal. Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing. CoRR, abs/0906.3234, 2009.
[18] D.L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci., 106(45):18914, 2009.
[19] Y. Xia and M.S. Kamel. A cooperative recurrent neural network for solving l1 estimation problems with general linear constraints. Neural Computation, 20(3):844-872, 2008.
[20] C.J. Rozell, D.H. Johnson, R.G. Baraniuk, and B.A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20(10):2526-2563, 2008.
Exact learning curves for Gaussian process regression on large random graphs

Peter Sollich, Department of Mathematics, King's College London, London, WC2R 2LS, U.K., [email protected]

Matthew J. Urry, Department of Mathematics, King's College London, London, WC2R 2LS, U.K., [email protected]

Abstract

We study learning curves for Gaussian process regression which characterise performance in terms of the Bayes error averaged over datasets of a given size. Whilst learning curves are in general very difficult to calculate, we show that for discrete input domains, where similarity between input points is characterised in terms of a graph, accurate predictions can be obtained. These should in fact become exact for large graphs drawn from a broad range of random graph ensembles with arbitrary degree distributions where each input (node) is connected only to a finite number of others. Our approach is based on translating the appropriate belief propagation equations to the graph ensemble. We demonstrate the accuracy of the predictions for Poisson (Erdos-Renyi) and regular random graphs, and discuss when and why previous approximations of the learning curve fail.

1 Introduction

Learning curves are a convenient way of characterising the performance that can be achieved with machine learning algorithms: they give the generalisation error $\epsilon$ as a function of the number of training examples n, averaged over all datasets of size n under appropriate assumptions about the data-generating process. Such a characterization is particularly useful in the case of non-parametric approaches such as Gaussian processes (GPs) [1], where in contrast to the parametric case [2] there is no generic classification of possible learning curves. Here we study GP regression, where a real-valued output function f(x) is to be learned. Qualitatively, GP learning curves are relatively well understood for the scenario where the inputs x come from a continuous space, typically $\mathbb{R}^n$ [3, 4, 5, 6, 7, 8, 9, 10, 11]. However, except in the limit of large n, or for very specific situations like one-dimensional inputs [3], the learning curves cannot be calculated exactly. Here we show that this is possible for discrete input spaces where similarity between input points can be represented as a graph whose edges connect similar points, inspired by work at last year's NIPS that developed simple approximations for this scenario [12]. There are many potential application domains where learning of such functions of discrete inputs x could be relevant, for example if x is a research paper whose impact f(x) we would like to predict; the similarity graph could then be constructed on the basis of shared authorship. Or we could be trying to learn functions on generic symbol strings x, for example ones characterizing protein amino acid sequences, and the similarity graph would have edges between homologous molecules. Our aim is to find out how well GP regression can perform in such discrete domains; alternative inference approaches including online algorithms [13, 14, 15, 16] would also be interesting to study but are outside the scope of the present paper. We focus on large sparse random graphs, where each node is connected only to a finite number of other nodes even though the overall number of nodes in the graph is large.

In section 2 we give a brief overview of GP regression and summarize the approximation for the learning curves used in previous work [4, 8, 12].
Section 3 then explains our method: following a similar approach in [17] for random matrix spectra, we write down the belief propagation equations for a given graph in the form normally used in the cavity method [18] of statistical mechanics, and then translate them to graphs drawn from a random graph ensemble. Because for sparse random graphs typical loop lengths grow with the graph size, the belief propagation equations and hence our learning curve predictions should become exact for large graphs. Section 4 compares the predictions with simulation results for Poisson (Erdos-Renyi) graphs, where each edge is independently present with some small probability, and random regular graphs, where each node has the same degree (number of neighbours). The new predictions are indeed very accurate, and substantially more so than previous approximations. In section 4.1 we discuss in more detail the relationship between our work and these approximations, to rationalize where the strongest deviations occur. Finally, section 5 summarises our results and discusses open questions and directions for future work.

2 GP regression and approximate learning curves

Gaussian processes have become a well known machine learning technique used in a wide range of areas, see e.g. [19, 20, 21]. One reason for their success is the intuitive way that a priori information about the function to be learned is transparently encoded by the covariance and mean functions of the GP. A GP is a Gaussian prior over functions f with a fixed covariance function (kernel) C and mean function (assumed to be 0). In the simplest case the likelihood is also Gaussian, i.e. we assume that the outputs $y_\mu$ in a set of examples $D = \{(i_1, y_1), \ldots, (i_N, y_N)\}$ are obtained by corrupting the clean function values $f_{i_\mu}$ with i.i.d. Gaussian noise of variance $\sigma^2$. Then the posterior distribution over functions is, from Bayes' theorem $P(f|D) \propto P(f)P(D|f)$:

$P(f|D) \propto \exp\!\left(-\frac{1}{2} f^T C^{-1} f - \frac{1}{2\sigma^2} \sum_{\mu=1}^{N} (y_\mu - f_{i_\mu})^2\right)$   (1)

We consider GPs in discrete spaces, where each input is a node of a graph and can therefore be given a discrete label i as anticipated above; $f_i$ is the associated function value. If the graph has V nodes, the covariance function is then just a $V \times V$ matrix. A number of possible forms for covariance functions on graphs have been proposed. We will focus on the relatively flexible random walk covariance function [22],

$C = \frac{1}{\kappa}\left((1 - a^{-1}) I + a^{-1} D^{-1/2} A D^{-1/2}\right)^p, \quad a \ge 2, \quad p \ge 0$   (2)

Here A is the adjacency matrix of the graph, with $A_{ij} = 1$ if nodes i and j are connected by an edge, and 0 otherwise; $D = \mathrm{diag}\{d_1, \ldots, d_V\}$ is a diagonal matrix containing the degrees of the nodes in the graph ($d_i = \sum_j A_{ij}$). One can easily see the relationship to a random walk: the unnormalised covariance function is a (symmetrised) p-step "lazy" random walk, with probability $a^{-1}$ of moving to a neighbouring node at each step. The prior thus assumes that function values up to a distance p along the graph are correlated with each other, to an extent determined by the hyperparameter $a^{-1}$. The constant $\kappa$ will be chosen throughout to normalise C so that $\frac{1}{V}\sum_i C_{ii} = 1$, which corresponds to setting the average prior variance of the function values to unity.
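As a sketch of how this covariance could be built in practice (numpy assumed; the function name and the normalisation step are ours), for a graph with no isolated nodes:

```python
import numpy as np

def random_walk_kernel(adj, a=2.0, p=10):
    """Normalised random walk covariance of eq. (2) from an adjacency matrix."""
    d = adj.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))                 # assumes all degrees d_i > 0
    M = (1.0 - 1.0 / a) * np.eye(len(d)) + (1.0 / a) * Dm @ adj @ Dm
    C = np.linalg.matrix_power(M, p)
    return C / np.diag(C).mean()                   # kappa: (1/V) sum_i C_ii = 1
```

For graphs that may contain isolated nodes one would regularise the degree matrix, as discussed at the end of section 3.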
Our main concern in this paper is GP learning curves in discrete input spaces. The learning curve describes how the average generalisation error (mean square error) $\epsilon$ decreases with the number of examples N. Qualitatively, it gives the rate at which one would expect a GP to learn a function in the average case. The generalisation error on an ensemble of graphs is given by

$\epsilon = \left\langle \frac{1}{V} \sum_i (f_i - \bar{f}_i)^2 \right\rangle_{f|D,\, D,\, \mathrm{graphs}}$   (3)

where f is the uncorrupted (clean) teacher or target function, and $\bar{f}$ is the posterior mean function of the GP, which gives the function values we predict on the basis of the data D. (We focus on the zero prior mean case throughout. All results translate fairly straightforwardly to the non-zero mean case, but this complicates the algebra without leading to substantially new insights.) It is worth noting that the generalisation error for a graph ensemble contains an additional average over this ensemble. As is standard in the study of learning curves, we have assumed a matched scenario where the posterior P(f|D) for our predictions is also the posterior over the underlying target functions. The generalisation error is then the Bayes error, and is given by the average posterior variance. Sollich [4] and later Opper [7], with a more general replica approach, showed that for continuous input spaces a reasonable approximation to the learning curve could be expressed as the solution of the following self-consistent equation:

$\epsilon = g\!\left(\frac{N}{\epsilon + \sigma^2}\right), \qquad g(h) = \sum_{\alpha=1}^{V} (\lambda_\alpha^{-1} + h)^{-1}$   (4)

Here the $\lambda_\alpha$ are appropriately defined eigenvalues of the covariance function. The motivation for our study is work presented at NIPS 2009 [12], which demonstrated that this approximation can also be used in discrete domains, but is not always accurate. Studying random walk and diffusion kernels [22] on random regular graphs, the authors showed that although the eigenvalue-based approximation is reasonable for both the large and the small N limits, it fails to accurately predict the learning curve in the important transition region between these two extremes, drastically so for low noise variances $\sigma^2$. In the next section we will show that this shortcoming can be overcome by the cavity method (belief propagation), which explicitly takes advantage of the sparse structure of the underlying graph. This will give an accurate approximation for the learning curves in a broad range of ensembles of sparse random graphs.
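Equation (4) is easy to evaluate numerically. A sketch by direct fixed-point iteration (numpy assumed; the iteration scheme, the names, and our reading of the "appropriately defined" eigenvalues as those of C/V under the uniform input distribution are all our own choices):

```python
import numpy as np

def eigenvalue_learning_curve(C, N_values, sigma2, iters=500):
    """Self-consistent eigenvalue approximation of eq. (4)."""
    V = C.shape[0]
    lam = np.linalg.eigvalsh(C) / V        # assumed normalisation of the eigenvalues
    lam = lam[lam > 1e-12]                 # drop numerically zero modes
    g = lambda h: np.sum(1.0 / (1.0 / lam + h))
    eps = []
    for N in N_values:
        e = g(0.0)                         # epsilon at N = 0 (average prior variance)
        for _ in range(iters):
            e = g(N / (e + sigma2))
        eps.append(e)
    return np.array(eps)
```

With the normalisation above, g(0) equals the average prior variance of 1, so the curve starts at $\epsilon = 1$ for N = 0 and decays with N.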
3 Accurate predictions with the cavity method

The cavity method was developed in statistical physics [18] but is closely related to belief propagation; for a good overview of these and other mean field methods, see e.g. [23]. We begin with equation (3). Because we only need the posterior variance in the matched case considered here, we can shift f so that $\bar{f} = 0$; $f_i$ is then the deviation of the function value at node i from the posterior mean. In this notation, the Bayes error is

$\epsilon = \left\langle \int df\, \frac{1}{V} \sum_i f_i^2\, P(f|D) \right\rangle_{D,\, \mathrm{graphs}}$   (5)

where P(f|D) now contains in the exponent only the terms from (1) that are quadratic in f. To set up the cavity method, we begin by defining a generating or partition function Z, for a fixed graph, as

$Z = \int df\, \exp\!\left(-\frac{1}{2} f^T C^{-1} f - \frac{1}{2\sigma^2} \sum_\mu f_{i_\mu}^2 - \frac{\lambda}{2} \sum_i f_i^2\right)$   (6)

An auxiliary parameter $\lambda$ has been added here to allow us to represent the Bayes error as $\epsilon = -\lim_{\lambda \to 0} \frac{2}{V} \frac{\partial}{\partial \lambda} \langle \log Z \rangle_{D,\, \mathrm{graphs}}$. The dependence on the dataset D appears in Z only through the sum over $\mu$. It will be more useful to write this as a sum over all nodes: if $n_i$ counts the number of examples seen at node i, then $\sum_\mu f_{i_\mu}^2 = \sum_i n_i f_i^2$.

Even with this replacement, the partition function in equation (6) is not yet suitable for an application of the cavity method, since the inverse covariance function cannot be written explicitly and generates interaction terms $f_i f_j$ between nodes that can be far away from each other along the graph. To eliminate the inverse of the covariance function we therefore perform a Fourier transform on the first term in the exponent, $\exp(-\frac{1}{2} f^T C^{-1} f) \propto \int dh\, \exp(-\frac{1}{2} h^T C h + i \sum_i h_i f_i)$. The integral over f then factorizes over the $f_i$, and one finds

$Z \propto \int dh\, \exp\!\left(-\frac{1}{2} h^T C h - \frac{1}{2} h^T \mathrm{diag}\{(n_i/\sigma^2 + \lambda)^{-1}\}\, h\right)$   (7)

Substituting the explicit form of the covariance function (2) into equation (7) we have

$Z \propto \int dh\, \exp\!\left(-\frac{1}{2} h^T \sum_{q=0}^{p} c_q (D^{-1/2} A D^{-1/2})^q h - \frac{1}{2} h^T \mathrm{diag}\{(n_i/\sigma^2 + \lambda)^{-1}\}\, h\right)$   (8)

where we have written the power in equation (2) as a binomial sum and defined $c_q = \frac{p!}{q!(p-q)!}\, a^{-q} (1 - a^{-1})^{p-q} / \kappa$. For p > 1, equation (8) still has interactions with more than the immediate neighbours. To solve this we introduce additional variables $h^q$, defined recursively via $h^q = (D^{-1/2} A D^{-1/2}) h^{q-1}$ for $q \ge 1$ and $h^0 = h$. These definitions are enforced via Dirac delta-functions, for each i and $q \ge 1$ giving a factor

$\delta\!\left(h_i^q - d_i^{-1/2} \sum_j A_{ij}\, d_j^{-1/2} h_j^{q-1}\right) \propto \int d\hat{h}_i^q\, \exp\!\left[i \hat{h}_i^q \left(h_i^q - d_i^{-1/2} \sum_j A_{ij}\, d_j^{-1/2} h_j^{q-1}\right)\right]$.

Substituting this into equation (8) gives the key advantage that now the adjacency matrix appears only linearly in the exponent, so that we have interactions only across edges of the graph. Rescaling the $h_i^q$ to $d_i^{1/2} h_i^q$, and similarly for the $\hat{h}_i^q$, and explicitly separating off the local terms from the interactions, finally yields

$Z \propto \int \prod_i \left(\prod_{q=0}^{p} dh_i^q \prod_{q=1}^{p} d\hat{h}_i^q\right) \exp\!\left(-\frac{1}{2} \sum_{q=0}^{p} c_q d_i h_i^0 h_i^q - \frac{1}{2} \frac{d_i (h_i^0)^2}{n_i/\sigma^2 + \lambda} + i \sum_{q=1}^{p} d_i \hat{h}_i^q h_i^q\right) \times \prod_{(ij)} \exp\!\left(-i \sum_{q=1}^{p} \left(\hat{h}_i^q h_j^{q-1} + \hat{h}_j^q h_i^{q-1}\right)\right)$   (9)

We now have the partition function of a (complex-valued) Gaussian graphical model. By differentiating log Z with respect to $\lambda$, keeping track of $\lambda$-dependent prefactors not written above, one finds that the Bayes error is

$\epsilon = \lim_{\lambda \to 0} \frac{1}{V} \sum_i \frac{1}{n_i/\sigma^2 + \lambda} \left(1 - \frac{d_i \langle (h_i^0)^2 \rangle}{n_i/\sigma^2 + \lambda}\right)$   (10)

and so we need the marginal distributions of the $h_i^0$. This is where the cavity method enters: for a large random graph the structure is locally treelike, so that if node i were eliminated, the corresponding subgraphs (locally trees) rooted at the neighbours $j \in N(i)$ of i would become independent [17]. The resulting cavity marginals $P_j^{(i)}(h_j, \hat{h}_j | D)$ can then be calculated iteratively within these subgraphs, giving the cavity update equations

$P_j^{(i)}(h_j, \hat{h}_j | D) \propto \exp\!\left(-\frac{1}{2} \sum_{q=0}^{p} c_q d_j h_j^0 h_j^q - \frac{1}{2} \frac{d_j (h_j^0)^2}{n_j/\sigma^2 + \lambda} + i \sum_{q=1}^{p} d_j \hat{h}_j^q h_j^q\right) \times \prod_{k \in N(j)\setminus i} \int dh_k\, d\hat{h}_k\, \exp\!\left(-i \sum_{q=1}^{p} \left(\hat{h}_j^q h_k^{q-1} + \hat{h}_k^q h_j^{q-1}\right)\right) P_k^{(j)}(h_k, \hat{h}_k | D)$   (11)

One sees that these equations are solved self-consistently by complex-valued Gaussian distributions with mean zero and covariance matrices $V_j^{(i)}$. By performing the Gaussian integrals in the cavity update equations (11) explicitly, these equations then take the rather simple form

$V_j^{(i)} = \left(O_j - \sum_{k \in N(j)\setminus i} X V_k^{(j)} X\right)^{-1}$   (12)

where, ordering the variables at node i as $(h_i^0, h_i^1, \ldots, h_i^p, \hat{h}_i^1, \ldots, \hat{h}_i^p)$, we have defined the $(2p+1) \times (2p+1)$ matrices

$O_i = d_i \begin{pmatrix} c_0 + \frac{1}{n_i/\sigma^2 + \lambda} & \frac{1}{2}\mathbf{c}^T & \mathbf{0}^T \\ \frac{1}{2}\mathbf{c} & 0_{p,p} & -i I_p \\ \mathbf{0} & -i I_p & 0_{p,p} \end{pmatrix}, \qquad X = \begin{pmatrix} 0_{p+1,p+1} & i J^T \\ i J & 0_{p,p} \end{pmatrix}$

with $\mathbf{c} = (c_1, \ldots, c_p)^T$ and J the $p \times (p+1)$ matrix with entries $J_{qq'} = \delta_{q', q-1}$.
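These fixed-graph objects are straightforward to code up. A sketch (numpy assumed; the indexing convention and names are ours, matching the variable ordering $(h^0, \ldots, h^p, \hat{h}^1, \ldots, \hat{h}^p)$ above):

```python
import numpy as np
from math import comb

def rw_coeffs(p, a, kappa=1.0):
    """Binomial coefficients c_q of the expanded kernel (2)."""
    return [comb(p, q) * a**(-q) * (1.0 - 1.0 / a)**(p - q) / kappa
            for q in range(p + 1)]

def cavity_blocks(p, c, d, n, sigma2, lam):
    """The (2p+1)x(2p+1) matrices O_i and X defined after eq. (12)."""
    m = 2 * p + 1
    O = np.zeros((m, m), dtype=complex)
    O[0, 0] = c[0] + 1.0 / (n / sigma2 + lam)
    for q in range(1, p + 1):
        O[0, q] = O[q, 0] = 0.5 * c[q]
        O[q, p + q] = O[p + q, q] = -1j          # couples h^q with hhat^q
    O *= d
    X = np.zeros((m, m), dtype=complex)
    for q in range(1, p + 1):
        X[p + q, q - 1] = X[q - 1, p + q] = 1j   # couples hhat^q with h^{q-1}
    return O, X

def cavity_update(O, X, V_neighbours):
    """One step of eq. (12) given the incoming cavity covariances."""
    m = O.shape[0]
    S = sum((X @ V @ X for V in V_neighbours), np.zeros((m, m), complex))
    return np.linalg.inv(O - S)
```

Iterating cavity_update over the edges of a fixed graph until convergence yields the single-instance cavity solution; the ensemble version follows next.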
Each ensemble is characterised by the distribution p(d) of the degrees d_i, with every graph that has the desired degree distribution being assigned the same probability. Instead of individual cavity covariance matrices V_j^{(i)}, we need to consider their probability distribution W(V) across all edges of the graph. Picking at random an edge (i, j) of a graph, the probability that node j will have degree d_j is then p(d_j)\, d_j / \bar{d}, because such a node has d_j "chances" of being picked. (The normalisation factor \bar{d} is the average degree.) Using again the locally treelike structure, the incoming (to node j) cavity covariances V_k^{(j)} will be i.i.d. samples from W(V). Thus a fixed point of the cavity update equations corresponds to a fixed point of an update equation for W(V):

    W(V) = \left\langle \sum_d \frac{p(d)\, d}{\bar{d}} \int \prod_{k=1}^{d-1} dV_k\, W(V_k)\; \delta\!\left( V - \Big( O - \sum_{k=1}^{d-1} X V_k X \Big)^{-1} \right) \right\rangle_n    (13)

Because the node label is now arbitrary, we have abbreviated V_j^{(i)} to V, d_j to d, O_j to O and V_k^{(j)} to V_k. The average is over the distribution over the number of examples n \equiv n_j at node j in the dataset D. Assuming for simplicity that examples are drawn with uniform input probability across all nodes, this distribution is simply n \sim Poisson(\nu) in the limit of large N and V at fixed \nu = N/V. In general, equation (13) (which can also be formally derived using the replica approach [24]) cannot be solved analytically, but we can solve it numerically using a standard population dynamics method [25]. Once we have W(V), the Bayes error can be found from the graph ensemble version of equation (10), which is obtained by inserting the explicit expression for \langle (h_i^0)^2 \rangle in terms of the cavity marginals of the neighbouring nodes, and replacing the average over nodes with an average over p(d):

    \epsilon = \lim_{\lambda \to 0} \left\langle \sum_d \frac{p(d)}{n/\sigma^2 + \lambda} \left( 1 - \frac{d}{n/\sigma^2 + \lambda} \int \prod_{k=1}^{d} dV_k\, W(V_k)\, \Big( \big( O - \sum_{k=1}^{d} X V_k X \big)^{-1} \Big)_{00} \right) \right\rangle_n    (14)

The number of examples at the node is again to be averaged over n \sim Poisson(\nu). The subscript "00" indicates the top left element of the matrix, which determines the variance of h^0. To be able to use equation (14), it needs to be rewritten in a form that remains explicitly nonsingular when n = 0 and \lambda \to 0. We split off the n-dependence of the matrix inverse by writing O - \sum_{k=1}^{d} X V_k X = M + [d/(n/\sigma^2 + \lambda)]\, e_0 e_0^T, where e_0^T = (1, 0, \dots, 0). The matrix inverse appearing above can then be expressed using the Woodbury formula as

    M^{-1} - \frac{M^{-1} e_0 e_0^T M^{-1}}{(n/\sigma^2 + \lambda)/d + e_0^T M^{-1} e_0}    (15)

To extract the (0,0)-element (top left) as required, we multiply by e_0^T on the left and e_0 on the right. After some simplification the \lambda \to 0 limit can then be taken, with the result

    \epsilon = \left\langle \sum_d p(d) \int \prod_{k=1}^{d} dV_k\, W(V_k)\; \frac{1}{n/\sigma^2 + d\, (M^{-1})_{00}} \right\rangle_n    (16)

This has a simple interpretation: the cavity marginals of the neighbours provide an effective Gaussian prior for each node, whose inverse variance is d\,(M^{-1})_{00}. The self-consistency equation (13) for W(V) and the expression (16) for the resulting Bayes error are our main results. They allow us to predict learning curves as a function of the number of examples per node, \nu, for arbitrary degree distributions p(d) of our random graph ensemble, providing the graphs are sparse, and for arbitrary noise level \sigma^2 and covariance function hyperparameters p and a. We note briefly that in graphs with isolated nodes (d = 0) one has to be slightly careful, as already in the definition of the covariance function (2) one should replace D \to D + \kappa I to avoid division by zero, taking \kappa \to 0 at the end.
For d = 0 one then finds in the expression (16) that (M^{-1})_{00} = \frac{1}{c_0 \kappa}, so that (\kappa + d)(M^{-1})_{00} = \kappa (M^{-1})_{00} = 1/c_0. This is to be expected, since isolated nodes each have a separate Gaussian prior with variance c_0.

4 Results

We will begin by comparing the performance of our new cavity prediction (equation (16)) against the eigenvalue approximation (equation (4)) from [4, 7], for random regular graphs with degree 3 (so that p(d) = \delta_{d,3}). In this way we can exploit the work of [12], where the quality of the approximation (4) for this case was studied in some detail.

Figure 1: (Left) A comparison of the cavity prediction (solid line with triangles) against the eigenvalue approximation (dashed line) for the learning curves for random regular graphs of degree 3, and against simulation results for graphs with V = 500 nodes (solid line with circles). Random walk kernel with p = 1, a = 2; noise level as shown. (Right) As before, with p = 10, a = 2. (Bottom) Similarly for Poisson (Erdős-Rényi) graphs with c = 3.

As can be seen in figure 1 (left) and (right), the cavity approach is accurate along the entire learning curve, to the point where the prediction is visually almost indistinguishable from the numerical simulation results. Importantly, the cavity approach predicts even the midsection of the learning curve for intermediate values of \nu, where the eigenvalue prediction clearly fails. The deviations between the cavity theory and the eigenvalue predictions are largest in this central part because at this point fluctuations in the number of examples seen at each node have the greatest effect. Indeed, for much smaller \nu, the dataset does not contain any examples from many of the nodes, i.e. n = 0 is dominant and fluctuations towards larger n have low probability. For large \nu, the dataset typically contains many examples for each node and Poisson fluctuations around the average value n = \nu are small. The fluctuation effects for intermediate \nu are suppressed when the noise level \sigma^2 is large, because then the generalisation error in the range of intermediate \nu is still fairly close to its initial value (\nu = 0). But for the smaller noise levels, fluctuations in the number of examples for each node can have a large effect, and correspondingly the eigenvalue prediction becomes very poor for intermediate \nu. We discuss this further in section 4.1. Comparing figure 1 (left) and (right), it can also be seen that, unlike the eigenvalue-based approximation, the cavity prediction for the learning curve does not deteriorate as p is varied towards lower values. Similar conclusions apply with regard to changes of a (results not shown).

Next we consider Poisson (Erdős-Rényi) graphs, where each edge is present independently with probability c/V [26]. This leads to a Poisson distribution of degrees, p(d) = e^{-c} c^d / d!. Figure 1 (bottom) shows the performance of our cavity prediction for this graph ensemble with c = 3 for a GP with p = 10, a = 2, in comparison to simulation results for V = 500. The cavity prediction clearly outperforms the eigenvalue-based approximation and again remains accurate even in the central part of the learning curve. Taken together, the results for random regular and Poisson graphs clearly confirm our expectation that the cavity prediction for the learning curve that we have derived should be exact for large graphs. It is worth noting that our new cavity prediction will work for arbitrary degree distributions and is limited only by the assumption of graph sparsity.
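The cavity predictions discussed above come from solving equation (13) by population dynamics and then averaging (16) by Monte Carlo. A minimal sketch follows, with our own names and defaults, no convergence monitoring, and isolated nodes (d = 0) simply excluded rather than regularised with \kappa as described above; build_bare_O_X is the same reconstruction-based helper as in the fixed-graph sketch of section 3.

```python
import numpy as np

def build_bare_O_X(p, c, d):
    """d * (c-couplings of O_i), without its n-dependent (0,0) part, plus X."""
    m = 2 * p + 1
    O = np.zeros((m, m), dtype=complex)
    O[0, 0] = c[0]
    for q in range(1, p + 1):
        O[0, q] = O[q, 0] = 0.5 * c[q]
        O[q, p + q] = O[p + q, q] = -1j
    X = np.zeros((m, m), dtype=complex)
    for q in range(1, p + 1):
        X[p + q, q - 1] = X[q - 1, p + q] = 1j
    return d * O, X

def learning_curve_point(p_deg, c, p, sigma2, nu, lam=1e-6,
                         pop=2000, steps=100000, avg=20000, seed=0):
    """One point of the learning curve: population dynamics for (13),
    then a Monte Carlo estimate of the Bayes error (16)."""
    rng = np.random.default_rng(seed)
    degs = np.arange(1, len(p_deg))                   # d = 0 skipped here
    pd = p_deg[1:] / p_deg[1:].sum()
    edge_pd = degs * pd / (degs * pd).sum()           # p(d) d / dbar
    m = 2 * p + 1
    members = [np.eye(m, dtype=complex) for _ in range(pop)]
    _, X = build_bare_O_X(p, c, 1)
    for _ in range(steps):                            # update (13)
        d = rng.choice(degs, p=edge_pd)
        n = rng.poisson(nu)
        O, _ = build_bare_O_X(p, c, d)
        O[0, 0] += d / (n / sigma2 + lam)
        S = sum(X @ members[k] @ X for k in rng.integers(pop, size=d - 1))
        members[rng.integers(pop)] = np.linalg.inv(O - S)
    eps = 0.0
    for _ in range(avg):                              # Bayes error, (16)
        d = rng.choice(degs, p=pd)
        n = rng.poisson(nu)
        M, _ = build_bare_O_X(p, c, d)
        M -= sum(X @ members[k] @ X for k in rng.integers(pop, size=d))
        eps += 1.0 / (n / sigma2 + d * np.linalg.inv(M)[0, 0].real)
    return eps / avg
```

Looping this over a grid of \nu values, with c the coefficient vector (c_0, ..., c_p) of equation (8), gives the solid cavity curves compared against simulations in figure 1.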
4.1 Why the eigenvalue approximation fails

The derivation of the eigenvalue approximation (4) by Opper in [8] gives some insight into when and how this approximation breaks down. Opper takes equation (6) and uses the replica trick to write \langle \log Z \rangle_D = \lim_{n \to 0} \frac{1}{n} \log \langle Z^n \rangle_D. The average of Z^n is calculated for integer n and then appropriately continued to n \to 0. The required nth power of equation (6) is in our case

    \langle Z^n \rangle_D = \int \prod_{a=1}^{n} df^a \left\langle \exp\!\left( -\frac{1}{2} \sum_a f^{aT} C^{-1} f^a - \frac{1}{2\sigma^2} \sum_{i,a} n_i (f_i^a)^2 - \frac{\lambda}{2} \sum_{i,a} (f_i^a)^2 \right) \right\rangle_D    (17)

The dataset average, over n_i \sim Poisson(\nu), then gives

    \langle Z^n \rangle = \int \prod_{a=1}^{n} df^a \exp\!\left( -\frac{1}{2} \sum_a f^{aT} C^{-1} f^a + \nu \sum_i \left( e^{-\sum_a (f_i^a)^2 / 2\sigma^2} - 1 \right) - \frac{\lambda}{2} \sum_{i,a} (f_i^a)^2 \right)    (18)

If one now wants to proceed without explicitly exploiting the sparse graph structure, one has to approximate the exponential term in the exponent. Opper does this using a variational approximation for the distribution of the f^a, of Gaussian form, and this eventually leads to the approximation (4) for the learning curve. This approach is evidently justified for large \sigma^2, where a Taylor expansion of the exponential term in (18) can be truncated after the quadratic term. For small noise levels, on the other hand, the Gaussian variational approach clearly does not capture all the details of the fluctuations in the numbers of examples n_i. By comparison, in this paper, using the cavity method, we are able to retain the average over D explicitly, without the need to approximate the distribution of the n_i. The result of this is that the section of the learning curve where fluctuations in numbers of examples play a large role is captured accurately, while the Gaussian variational (eigenvalue) approach can give wildly inaccurate results there.

5 Conclusions and further work

In this paper we have studied the learning curves of GP regression on large random graphs. In a significant advance on the work of [12], we showed that the approximations for learning curves proposed by Sollich [4] and Opper [7] for continuous input spaces can be greatly improved upon in the graph case, by using the cavity method. We argued that the resulting predictions should in fact become exact in the limit of large random graphs. Section 3 derived the learning curve approximation using the cavity method for arbitrary degree distributions. We defined a generating function Z (equation (6)) from which the generalisation error \epsilon can be obtained by differentiation. We then rewrote this using Fourier transforms (equation (7)) and introduced additional variables (equation (9)) to get Z into the required form for a cavity approach: the partition function of a complex-valued Gaussian graphical model. By standard arguments we then derived the cavity update equations for a fixed graph (equation (12)). Finally we generalised from these to graph ensembles (equation (13)), taking the limit of large graph size. The resulting prediction for the generalisation error (equation (16)) has an intuitively appealing interpretation, where each node in the graph learns subject to an effective (and data-dependent) Gaussian prior provided by its neighbours. In section 4 we compared our new prediction to the eigenvalue approximation results in [12]. We showed that our new method is far more accurate in the challenging midsection of the learning curves than the eigenvalue version, both for random regular and Poisson graph ensembles (figure 1).
Subsection 4.1 discusses why the older approximation, derived from a replica perspective in [7], is inaccurate compared to the cavity method. To retain tractable averages in continuous input spaces, it has to approximate the fluctuations, across the dataset, in the number of examples seen at each node, resulting in the inaccurate predictions seen in figure 1. On graphs one is able to perform this average explicitly when calculating the cavity updates and the resulting Bayes error, giving a far more accurate prediction of the learning curves.

Although the learning curves predicted using the cavity method cover a broad range of graph ensembles, because they apply for arbitrary p(d), there do remain some interesting types of graph ensembles (for instance graphs with community structure) that cannot be generated by imposing only the degree distribution. Indeed, an important assumption in the current work is that small loops are rare, whilst in community graphs, where nodes exhibit preferential attachment, there can be many small loops. We are in the process of analysing GP learning on such graphs using the approach of Rogers et al. [27], where community graphs are modelled as having a sparse superstructure joining clusters of densely connected nodes.

Following previous studies [12], we have in this paper set the scale of the covariance function by normalising the average prior covariance over all nodes. For the Poisson graph case our learning curve simulations then show, however, that there can be large variations in the local prior variances C_ii, while from the Bayesian modelling point of view it would seem more plausible to use covariance functions where all C_ii = 1. This could be achieved by pre- and post-multiplying the random walk covariance matrix by an appropriate diagonal matrix. We hope to study this modified covariance function in future, and to extend the cavity prediction for the learning curves to this case.

It would also be interesting to extend our approach to model mismatch, where we assume the data-generating process is a GP with hyperparameters that differ from those of the GP being used for inference. This was studied for continuous input spaces in [10]; equally interesting would be a study of mismatch with a fixed target function, as analysed by Opper et al. [8]. It should further be useful to study the case of mismatched graphs, rather than hyperparameters. This is relevant because frequently in real-world learning one will have only partial knowledge of the graph structure, for instance in metabolic networks when not all of the pathways have been discovered, or social networks where friendships are continuously being made and broken. Another interesting avenue for further research would be to look at multiple-output (multi-task) GPs on graphs, to see if the work of Chai [28] can be extended to this scenario. One would hope that, as seen with the learning curves for single-output GPs in this paper, input domains defined by graphs might allow simplifications in the analysis and provide more accurate bounds or even exact predictions. Finally, it would be worth extending the study of graph mismatch to the case of evolving graphs and functions. Here spatio-temporal GP regression could be employed to predict functions changing over time, perhaps including a model-based approach as in [29] to account for the evolving graph structure.

References

[1] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning).
MIT Press, December 2005.
[2] Shun-ichi Amari, Naotake Fujita, and Shigeru Shinomoto. Four types of learning curves. Neural Computation, 4(4):605-618, 1992.
[3] M. Opper. Regression with Gaussian processes: Average case performance. In Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, pages 17-23. Springer-Verlag, 1997.
[4] P. Sollich. Learning curves for Gaussian processes. In Advances in Neural Information Processing Systems 11, pages 344-350. MIT Press, 1999.
[5] F. Vivarelli and M. Opper. General bounds on Bayes errors for regression with Gaussian processes. In Advances in Neural Information Processing Systems 11, pages 302-308. MIT Press, 1999.
[6] C. K. I. Williams and F. Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Machine Learning, 40(1):77-102, 2000.
[7] M. Opper and D. Malzahn. Learning curves for Gaussian processes regression: A framework for good approximations. In Advances in Neural Information Processing Systems 14, pages 273-279. MIT Press, 2001.
[8] M. Opper and D. Malzahn. A variational approach to learning curves. In Advances in Neural Information Processing Systems 14, pages 463-469. MIT Press, 2002.
[9] P. Sollich and A. Halees. Learning curves for Gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393-1428, 2002.
[10] P. Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems 14, pages 519-526. MIT Press, 2002.
[11] P. Sollich. Can Gaussian process regression be made robust against model mismatch? In J. Winkler, N. Lawrence, and M. Niranjan, editors, Deterministic and Statistical Methods in Machine Learning, pages 211-228, Berlin, 2005. Springer.
[12] P. Sollich, M. J. Urry, and C. Coti. Kernels and learning curves for Gaussian process regression on random graphs. In Advances in Neural Information Processing Systems 22, pages 1723-1731. Curran Associates, Inc., 2009.
[13] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 305-312, New York, NY, USA, 2005. ACM.
[14] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. In Advances in Neural Information Processing Systems 19, pages 577-584. MIT Press, 2007.
[15] M. Herbster. Exploiting cluster-structure to predict the labeling of a graph. In Proceedings of the 19th International Conference on Algorithmic Learning Theory, pages 54-69. Springer, 2008.
[16] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. Learning Theory, 3120:624-638, 2004.
[17] Tim Rogers, Koujin Takeda, Isaac Pérez Castillo, and Reimer Kühn. Cavity approach to the spectral density of sparse symmetric random matrices. Physical Review E, 78(3):31116-31121, 2008.
[18] M. Mézard, G. Parisi, and M. A. Virasoro. Random free energies in spin glasses. Le Journal de Physique - Lettres, 46(6):217-222, 1985.
[19] M. T. Farrell and A. Correa. Gaussian process regression models for predicting stock trends. Relation, 10:3414, 2007.
[20] B. Ferris, D. Haehnel, and D. Fox. Gaussian processes for signal strength-based location estimation. In Proceedings of Robotics: Science and Systems, Philadelphia, USA, August 2006.
[21] Sunho Park and Seungjin Choi. Gaussian process regression for voice activity detection and speech enhancement. In International Joint Conference on Neural Networks, pages 2879-2882, Hong Kong, China, 2008.
Institute of Electrical and Electronics Engineers (IEEE).
[22] A. J. Smola and R. Kondor. Kernels and regularization on graphs. In M. Warmuth and B. Schölkopf, editors, Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop (COLT), pages 144-158, Heidelberg, 2003. Springer.
[23] M. Opper and D. Saad. Advanced Mean Field Methods: Theory and Practice. MIT Press, 2001.
[24] Reimer Kühn. Finitely coordinated models for low-temperature phases of amorphous systems. Journal of Physics A, 40(31):9227, 2007.
[25] M. Mézard and G. Parisi. The Bethe lattice spin glass revisited. The European Physical Journal B, 20(2):217-233, 2001.
[26] P. Erdős and A. Rényi. On random graphs, I. Publicationes Mathematicae (Debrecen), 6:290-297, 1959.
[27] Tim Rogers, Conrad Pérez Vicente, Koujin Takeda, and Isaac Pérez Castillo. Spectral density of random graphs with topological constraints. Journal of Physics A, 43(19):195002, 2010.
[28] Kian Ming Chai. Generalization errors and learning curves for regression with multi-task Gaussian processes. In Advances in Neural Information Processing Systems 22, pages 279-287. Curran Associates, Inc., 2009.
[29] M. Alvarez, D. Luengo, and N. D. Lawrence. Latent force models. In D. van Dyk and M. Welling, editors, Proceedings of the Twelfth International Workshop on Artificial Intelligence and Statistics, pages 9-16, Clearwater Beach, FL, USA, 2009. MIT Press.
A biologically plausible network for the computation of orientation dominance

Nuno Vasconcelos, Statistical Visual Computing Laboratory, University of California San Diego, La Jolla, CA 92039, [email protected]
Kritika Muralidharan, Statistical Visual Computing Laboratory, University of California San Diego, La Jolla, CA 92039, [email protected]

Abstract

The determination of dominant orientation at a given image location is formulated as a decision-theoretic question. This leads to a novel measure for the dominance of a given orientation θ, which is similar to that used by SIFT. It is then shown that the new measure can be computed with a network that implements the sequence of operations of the standard neurophysiological model of V1. The measure can thus be seen as a biologically plausible version of SIFT, and is denoted as bioSIFT. The network units are shown to exhibit trademark properties of V1 neurons, such as cross-orientation suppression, sparseness and independence. The connection between SIFT and biological vision provides a justification for the success of SIFT-like features and reinforces the importance of contrast normalization in computer vision. We illustrate this by replacing the Gabor units of an HMAX network with the new bioSIFT units. This is shown to lead to significant gains for classification tasks, leading to state-of-the-art performance among biologically inspired network models and performance competitive with the best non-biological object recognition systems.

1 Introduction

In the past decade, computer vision research in object recognition has firmly established the efficacy of representing images as collections of local descriptors of edge orientation. These descriptors are usually based on histograms of dominant orientation, for example the edge orientation histograms of [1], the SIFT descriptor of [2], or the HOG features of [3]. SIFT, in particular, could be considered today's default (low-level) representation for object recognition, adopted by hundreds of computer vision papers. The SIFT descriptor is heavily inspired by known computations of the early visual cortex [2], but has no formal detailed connection to computational neuroscience. Interestingly, a parallel, and equally important but seemingly unrelated, development has taken place in this area in the recent past. After many decades of modeling simple cells as linear filters plus "some" nonlinearity [4], neuroscientists have developed a much firmer understanding of their non-linear behavior. One property that has always appeared essential to the robustness of biological vision is the ability of individual cells to adapt their dynamic range to the strength of the visual stimulus. This adaptation appears as early as in the retina [5], is prevalent throughout the visual cortex [6], and seems responsible for the remarkable ability of the visual system to adapt to lighting variations. Within the last decade, it has been explained by the implementation of gain control in individual neurons, through the divisive normalization of their responses by those of their neighbors [7, 8]. Again, hundreds of papers have been written on divisive normalization and its consequences for visual processing. Today, there appears to be little dispute about its role as a component of the standard neurophysiological model of early vision [9]. In this work, we establish a formal connection between these two developments.
This connection is inspired by recent work on the link between the computations of the standard model and the basic operations of statistical decision theory [10]. We start by formulating the central motivating question for descriptors such as SIFT or HOG, how to represent locally dominant image orientation, as a decision-theoretic problem. An orientation θ is defined as dominant, at a location l of the visual field, if the Gabor response of orientation θ at l, x_θ(l), is both large and distinct from those of other orientations. An optimal statistical test is then derived to determine if x_θ(l) is distinct from the responses of the remaining orientations. The core of this test is the posterior probability of orientation of the visual stimulus at l, given x_θ(l). The dominance of orientation θ, within a neighborhood R, is then defined as the expected strength of responses x_θ(l), in R, which are distinct. This is shown to be a sum of the response amplitudes |x_θ(l)| across R, with each location weighted by the posterior probability that it contains stimulus of orientation θ. The resulting representation of orientation is similar to that of SIFT, which assigns each point to a dominant orientation and integrates responses over R. The main difference is that a location could contribute to more than one orientation, since the expected strength relies on a soft assignment of locations to orientations, according to their posterior orientation probability. Exploiting known properties of natural image statistics, and the framework of [10], we then show that this measure of orientation dominance can be computed with the sequence of operations of the standard neurophysiological model: simple cells composed of a linear filter, divisive normalization, and a saturating non-linearity, and complex cells that implement spatial pooling. The proposed measure of orientation dominance can then be seen as a biologically plausible version of that used by SIFT, and is denoted by bioSIFT. BioSIFT units are shown to exhibit the trademark properties of V1 neurons: their responses are closely fit by the Naka-Rushton equation [11], and they exhibit an inhibitory behavior, known as cross-orientation suppression, which is ubiquitous in V1 [12]. We note, however, that our goal is not to provide an alternative to SIFT. On the contrary, the formal connection between findings from computer vision and neuroscience provides additional justification to both the success of SIFT in computer vision, and the importance of divisive normalization in the visual cortex, as well as its connection to the determination of orientation dominance. The main practical benefit of bioSIFT is to improve the performance of biologically plausible recognition networks, whose performance it brings close to the level of the state of the art in computer vision. In the process of doing this, it points to the importance of divisive normalization in vision. While such normalization tends to be justified as a means to increase robustness to variations of illumination, a hypothesis that we do not dispute, it appears to make a tremendous difference even when such variations are not present. We illustrate these points through object recognition experiments with HMAX networks [13]. It is shown that the simple replacement of Gabor filter responses with the normalized orientation descriptors of bioSIFT produces very significant gains in recognition accuracy.
These gains hold for standard datasets, such as Caltech101, where lighting variations are not a substantial nuisance. This points to the alternative hypothesis that the fundamental role of contrast normalization is to determine orientation dominance. The hypothesis is substantiated by the fact that the bioSIFT-enhanced HMAX network substantially outperforms the previous best results in the literature of biologically-inspired recognition networks [14, 15]. While these networks implement a number of operations similar to those of bioSIFT, including the use of contrast normalized units, they do not have a precise functional justification (such as the determination of orientation dominance), lack a well defined optimality criterion, and do not have a rigorous statistical interpretation. The importance of these properties is further illustrated by experiments in a dataset composed exclusively of natural scenes [16], which (unlike Caltech) fully matches the assumptions under which bioSIFT is optimal (natural image statistics). In this dataset, the HMAX network with the bioSIFT features has performance identical to that of very recent state-of-the-art computer vision methods.

2 The bioSIFT Features

We start by describing the implementation of the bioSIFT network in detail. We lay out the computations, establish their conformity with the standard neurophysiological model, and analyze the statistical meaning of the computed features.

2.1 Motivation

Various authors have argued that perceptual systems compute optimal decisions tuned to the statistics of natural stimuli [17, 18, 19]. The ubiquity of orientation processing in visual cortex suggests that the estimation of local orientation is important for tasks such as object recognition. This is reinforced by the success, in computer vision, of algorithms based on SIFT or SIFT-like descriptors. While the classical view was that the brain simply performs a linear decomposition into orientation channels, through Gabor filtering, SIFT representations emphasize the estimation of dominant orientation. The latter is a very non-linear operation, involving the comparison of response strength across orientation channels, and requires inter-channel normalization. In SIFT, this is performed implicitly, by combining the computation of gradients with some post-processing heuristics. More formal estimates of dominant orientation can be obtained by formulating the problem in decision-theoretic terms, and deriving optimal decision rules for its solution. For this, we assume that the visual system infers dominant orientation from a set of visual features x \in \mathbb{R}^M, which measure stimulus amplitude at each orientation. In this work, we assume these features to be the set of responses X_i = I * G_i of the stimulus I to a bank of Gabor filters G_i. Here, G_i is the filter of the i-th orientation, and * denotes convolution. In principle, determining whether there is a dominant orientation requires the joint inspection of all feature channels X_i. Statistically, this implies modeling the joint feature distribution and is intractable for low-level vision. A more tractable question is whether the i-th channel responses, X_i, are distinct from those of the other channels, X_j, j \ne i. Letting \Theta denote the channel orientation, i.e. P_{X|\Theta}(x|i) = P_{X_i}(x), this question can be posed as a classification problem with two hypotheses of label Y \in \{0, 1\}, where

- Y = 1 if the i-th channel responses are distinct, i.e. P(X = x, \Theta = i) \ne P(X = x, \Theta \ne i),
- Y = 0 otherwise, i.e. P(X = x, \Theta = i) = P(X = x, \Theta \ne i).
This problem has class-conditional densities

    P(X = x, \Theta = i \mid Y = 1) = P(X = x, \Theta = i) = P_{X|\Theta}(x|i)\, P_\Theta(i)
    P(X = x, \Theta = i \mid Y = 0) = P(X = x, \Theta \ne i) = \sum_{j \ne i} P_{X|\Theta}(x|j)\, P_\Theta(j)

and the posterior probability of the "distinct" hypothesis given an observation from channel i is

    P(Y = 1 \mid X = x, \Theta = i) = \frac{P_{X|\Theta}(x|i)\, P_\Theta(i)}{\sum_j P_{X|\Theta}(x|j)\, P_\Theta(j)} = P_{\Theta|X}(i|x)    (1)

where we have assumed that P_Y(0) = P_Y(1) = 1/2. Given the response x_i(l) of X_i at location l \in R, the minimum probability of error (MPE) decision rule is to declare it distinct when

    P_{\Theta|X}(i \mid x_i(l)) = \frac{P_{X_i}(x_i(l))\, P_\Theta(i)}{\sum_j P_{X_j}(x_i(l))\, P_\Theta(j)} \;\ge\; \frac{1}{2}.    (2)

While this test determines if the responses of X_i are distinct from those of X_{j \ne i}, it does not determine if X_i is dominant: X_i could be distinct because it is the only feature that does not respond to the stimulus in R. The second question is to determine if the responses of X_i are both distinct and large. This requires a new random variable

    S(x_i) = \begin{cases} |x_i|, & \text{if } Y = 1 \\ 0, & \text{if } Y = 0 \end{cases}    (3)

which measures the strength (absolute value) of the distinct responses. The expected strength of distinct responses in R is then

    E_{Y,X|\Theta}[S(X) \mid \Theta = i] = \int |x|\, P_{Y|X,\Theta}(1|x,i)\, P_{X|\Theta}(x|i)\, dx    (4)
                                        = \int |x|\, P_{\Theta|X}(i|x)\, P_{X_i}(x)\, dx.    (5)

The empirical estimate of (5) from the sample x_i(l), l \in R, is

    \hat{S}(X_i)_R = \frac{1}{|R|} \sum_{l \in R} |x_i(l)|\, P_{\Theta|X}(i \mid x_i(l)).    (6)

Figure 1: bioSIFT computations for a given orientation θ: (a) an image, (b) response of the Gabor filter of orientation θ, (c) posterior probability map for orientation θ, (d) orientation dominance measure for channel θ; (e), (f), (g), (h): the image, Gabor response, posterior probability, and dominance measure of the same channel for a contrast-reduced version of the image.

This measure of the dominance of the i-th orientation is a sum of the response amplitudes |x_i(l)| across R, with each location weighted by the posterior probability that it contains stimulus of that orientation. It is similar to the measure used by SIFT, which assigns each point to a dominant orientation and integrates responses over R. The main difference is that a location could contribute to more than one orientation, since the expected strength relies on a soft assignment of locations to orientations, according to their posterior orientation probability. Figure 1 illustrates the computations of (6) for the image shown in a). The response of a Gabor filter of orientation θ = 3π/4 is shown in b), and the orientation probability map P_{\Theta|X}(i|x_i) in c). Note that these probabilities are much smaller than the Gabor responses in the body of the starfish, where the image is textured but there is no significant structure of orientation θ. On the other hand, they are largest for the locations where the orientation is dominant. Figure 1 d) shows the final dominance measure. The combined multiplication by the Gabor responses and averaging over R magnifies the responses where the orientation is dominant, suppressing the details due to texture or noise. This can be seen by comparing b) and d). Overall, (6) is large when the i-th orientation responses are 1) distinct from those of other channels and 2) large. It is small when they are either indistinct or small. One interesting property is that it penalizes large responses of X_i that are not informative of the presence of stimuli with orientation i. Hence, increasing the stimulus contrast does not increase \hat{S}(X_i)_R when responses x_i(l) cannot be confidently assigned to the i-th orientation. This can be seen in Figure 1 f) and h), where the Gabor response and dominance measure are shown for a low-contrast replica of the image of a). While the Gabor responses at low (f) and high (b) contrasts are substantially different, the dominance measure (d and h) stays almost constant. It follows that (6) implements contrast normalization, a topic to which we will return in later sections. It is worth noting that such normalization is accomplished without modeling joint distributions of responses across orientations. On the contrary, all quantities in (6) are scalar.

3 Biological plausibility

In this section we study the biological plausibility of the orientation dominance measure of (6).

3.1 Natural image statistics

Extensive research on the statistics of natural images has shown that the responses of bandpass features follow the generalized Gaussian distribution (GGD)

    P_X(x; \alpha, \beta) = \frac{\beta}{2\alpha\, \Gamma(1/\beta)} \exp\!\left( -\left( \frac{|x|}{\alpha} \right)^{\beta} \right)    (7)

where \Gamma(z) = \int_0^\infty e^{-t} t^{z-1}\, dt, z > 0, is the Gamma function, \alpha is a scale and \beta a shape parameter. The biological plausibility of statistical inference for GGD stimuli was extensively studied in [10].
This can be i )R when responses xi (l) cannot be confidently assigned to the i seen in Figure 1 f) and h), where the Gabor response and dominance measure are shown for a lowcontrast replica of the image of a). While the Gabor responses at low (f) and high (b) contrasts are substantially different, the dominance measure (d and h) stays almost constant. It follows that (6) implements contrast normalization, a topic to which we will return in later sections. It is worth noting that such normalization is accomplished without modeling joint distributions of response across orientations. On the contrary, all quantities in (6) are scalar. 3 Biological plausibility In this section we study the biological plausibility of the orientation dominance measure of (6). 3.1 Natural image statistics Extensive research on the statistics of natural images has shown that the responses of bandpass features follow the generalized gaussian distribution (GGD) !! ? |x| ? exp ? (7) PX (x; ?, ?) = 2??(1/?) ? R? where ?(z) = 0 e?t tz?1 , dt, t > 0 is the Gamma function, ? is a scale and ? a shape parameter. The biological plausibility of statistical inference for GGD stimuli was extensively studied in 4 to C1 ? ... | .|?i | .|?i ? ? |xi(l) | .. . .. . ... log P(xi(l)|?i) | .|?k ?i i input ? ? ? log P(xi(l)|?k) | .|?k ?(Xi)R C1 layer ? S1 layer Multi-scale image image Figure 2: One channel of the bioSIFT network. The large dashed box implements the computations of the simple cell, and the small one those of the complex cell. The simple cell computes the contribution of channel i to the expected value of the dominant response at pixel x, indicated by a filled box. Spatial pooling by the complex cell determines the channel?s contribution to the expected value of the dominant response within the pooling neighborhood. [10]. This work shows that various fundamental computations in statistics can indeed be computed biologically when a maximum a posteriori (MAP) estimate is adopted for ?? , using a conjugate (Gamma) prior. This MAP estimate is ??1/? ? ? n X ? ? (8) |x(j)|? + ? ?? ?M AP = ? n + ? j=1 where ? and ? are the prior hyperparameters, and x(j) a sample of training points. As is usual in Bayesian inference, the hyperparameter values are important when the sample is too small to enable reliable inference. This is not the case for the current work, where the estimates remain constant over a substantial range of their values. Hence, we simply set ? = 10?3 and ? = 1 in all experiments. For natural images, the value of ? is quite stable. We use ? = 0.5, (determined by fitting the GGD to a large set of images) in our experiments. 3.2 Biological computations 1 To derive a biologically plausible form of (6) we start by assuming that P? (i) = M . This is mostly for simplicity, the discussion could be generalized to account for any prior distribution of orientations. Under this assumption, using (1) PX|? (x|? = j) PX (x) P?|X (? = j|x) = P =P j (9) k PX|? (x|? = k) k PXk (x) and \ S(X i )R ? X |xi (l)|?i [log PX1 (xi (l)), . . . , log PXM (xi (l))] (10) l?R where ?k is the classical softmax activation function exp(qk ) ?k (q1 , ..., qn ) = Pn , j=1 exp(qj ) qj the log-likelihood (up to constants that cancel in (11)) qj = log PXj (xi (l)) = ??(xi (l); ?j ) ? Kj ? (11) (12) and, from (7) with the MAP estimate of ? from (8) and the responses in R as training sample, ! X |x|? ? 1 ? ?(x; ?k ) = ; ?k = |xk (l)| + ? ; Kj = log ?j = log ?j . (13) ?k |R| + ? ? 
Figure 3: (a) COS in real neurons (from [12]), and (b) in bioSIFT features; (c) contrast response of bioSIFT features and the corresponding Naka-Rushton fit; (d) distributions of Gabor and bioSIFT amplitudes; (e) example of orientation selectivity; (f) sample image and maximum bioSIFT response at each location; (g, h) conditional histograms of adjacent channels for Gabor (g) and bioSIFT (h) features.

The computations of (11)-(13) are those performed by simple cells in the standard neurophysiological model of V1. A bank of linear filters is applied at each location l of the field of view. This produces the Gabor responses x_i(l). Each response x_i(l) is divisively normalized by the sum of responses in the neighborhood R, for each orientation channel k, using (13). Notice that this implies that the conditional distribution of responses of a channel is learned locally, from the sample of responses in R. Altogether, (12) implements the computations of a divisively normalized simple cell. Finally, the softmax \xi_k is a multi-way sigmoidal non-linearity which replicates the well known saturating behavior of simple cells. The computation of the orientation dominance measure by (10) then corresponds to a complex cell, which pools the simple cell responses in R, modulated by the magnitude of the underlying Gabor responses. This produces each channel's contribution to the bioSIFT descriptor. A graphical description of the network is presented in Figure 2.

3.3 Naka-Rushton fit

In addition to replicating the standard model of V1, the biological plausibility of the bioSIFT features can be substantiated by checking if they reproduce well-established properties of neuronal responses. One characteristic property of neural responses of monkey and cat V1 is the tightness with which they can be fit by the Naka-Rushton equation [11]. The equation describes the average response to a sinusoidal grating of contrast c as

    R = R_{max} \frac{c^q}{c_{50}^q + c^q}    (14)

where R_{max} is the maximum mean response and c_{50} is the semi-saturation contrast, i.e. the contrast at which the response is half the saturation value. The parameter q, which determines the steepness of the curve, is remarkably stable for V1 neurons, where it takes values around 2 [20]. The fit between the contrast response of a bioSIFT unit and the Naka-Rushton function was determined, using the procedure of [11], and is shown in Figure 3 c). As in biology, the Naka-Rushton model fits the bioSIFT data quite well. Over multiple trials, the q parameter of the best fitting curve is stable and stays in the interval (1.7, 2.1).

3.4 Inhibitory effects

It is well known that V1 neurons have a characteristic inhibitory behavior, known as cross-orientation suppression (COS) [12, 7, 21]. This suppression is observed by measuring the response of a neuron, tuned to an orientation θ, to a sinusoidal grating of orthogonal orientation (θ ± 90°). When presented by itself, the grating barely evokes a response from the neuron. However, if superimposed with a grating of another orientation, it significantly reduces the response of the neuron to the latter. To test if the bioSIFT features exhibit COS, we repeated the set of experiments reported
To test if the bioSIFT features exhibit COS, we repeated the set of experiments reported 6 in [12]. These consist of measuring a simple cell response to a set of sinusoidal plaids obtained by summing 1) a test grating oriented along the cell?s preferred orientation, and 2) a mask grating of orthogonal orientation. The test and the mask have the same frequency as the cell?s Gabor filter. The cell response is recorded as a function of the contrast of the gratings. Figure 3 a) shows the results reported in [12], for a real neuron. The stimuli are shown on the left and the neuron?s response on the right. Note the suppression of the latter when the mask contrast increases. The response of the bioSIFT simple cell, shown in Figure 3 b), is identical to that of the neuron. From a functional point of view, the great advantage of COS is the resulting increase in selectivity of the orientation channels. This is illustrated in Figure 3 (e). The figure shows the results of an experiment that measured the response of 12 Gabor filters of orientation in [0o , 180o ] to a horizontal grating. While both the first and twelfth Gabor filters have relatively large responses to this stimulus, the twelfth channel of bioSIFT is strongly suppressed. When combined with the contrast invariance of Figure 1, this leads to a representation with strong orientation discrimination and robustness to lighting variations. An example of this is shown in Figure 3 (f) which shows the value of the dominance measure for the most dominant orientation at each image location (in ?split screen? with the original image). Note how the bioSIFT features capture information about dominant orientation and object shape, suppressing uninformative or noisy pixels. 3.5 Independence and sparseness Barlow [18] argued that the goal of sensory systems is to reduce redundancy, so as to produce statistically independent responses. A known property of the responses of bandpass features to natural images is a consistent pattern of higher order dependence, characterized by bow-tie shaped conditional distributions between feature pairs. This pattern is depicted in Figure 3 g), which shows the histogram of responses of a Gabor feature, conditioned on the response of the co-located feature of an adjacent orientation channel. Simoncelli [22] showed that divisively normalizing linear filter responses reduces these higher-order dependencies, making the features independent. As can be seen from (10), (12), and (13), the bioSIFT network divisively normalizes each Gabor response by the sum, across the spatial neighborhood R, of responses from each of the Gabor orientations (11). It is thus not surprising that, as shown in Figure 3 h), the conditional histograms of bioSIFT features are zero outside a small horizontal band around the horizontal axis. This implies that they are independent (knowledge of the value of one feature does not modify the distribution of responses of the other).This is a consistent observation across bioSIFT feature pairs. Another important, and extensively researched, property of V1 responses is their sparseness. Channel sparseness is closely related to independence across channels. Sparse representations have several important advantages, such as increased generalization ability and energy efficiency of neural decision-making circuits. Given the discussion above, it is not surprising that the contrast normalization inherent to the bioSIFT representation also makes it more sparse. 
This is shown in Figure 3 d), which compares the sparseness of the responses of both a Gabor filter and a bioSIFT unit to a natural image. It is worth noting that these properties have not been exploited in the SIFT literature itself. For example, independence could lead to more efficient implementations of SIFT-based recognizers than the standard visual words approach, which requires an expensive quantization of SIFT features with respect to a large codebook. We leave this as a topic for future research. 4 Experimental Evaluation In this section, we report on experiments designed to evaluate the benefits, for recognition, of the connections between SIFT and the standard neurophysiological model. 4.1 Biologically inspired object recognition Biologically motivated networks for object recognition have been recently the subject of substantial research [13, 23, 14, 15]. To evaluate the impact of adding bioSIFT features to these networks, we considered the HMAX network of [13], which mimics the structure of the visual cortex as a cascade of alternating simple and complex cell layers. The first layer encodes the input image as a set of complex cell responses, and the second layer measures the distance between these responses and a set of learned prototypes. The vector of these distances is then classified with a linear SVM. 7 Model Base HMAX [13] + enhancements [23] Pinto et al. [14] Jarrett et al [15] Lazebnik et al. [16] Zhang et al. [24] NBNN [25] Yang et al. [26] base bioSIFT HMAX +enhancements 30 training images/cat. 42 56 65 65.5 64.6 ? 0.8 66.2 ? 0.5 70.4 73.2 ? 0.5 54.5 69.3 ? 0.3 Model Fei-Fei et al [27] Lazebnik et al. [16] Yang et al [26] Kernel Codebooks [28] HMAX with bioSIFT Performance 65.2 81.4 ? 0.5 80.3 ? 0.9 76.7 ? 0.4 80.1 ? 0.6 Figure 4: Classification Results on Caltech-101(left) and the Scene Classification Database(right) For this evaluation, each unit of the first layer was replaced by a bioSIFT unit, implemented as in Figure 2. The experimental setup is similar to that of [23]: multi-class classification on Caltech101 (with the size of the images reduced so that their height is 140) using 30 images/object for training and at-most 50 for testing. The baseline accuracy of [13] was 42%. The work of [23] introduced several enhancements that were shown to considerably improve this baseline. Two of these enhancements, sparsification and inhibition, were along the lines of the contributions discussed in this work. Others, such as limiting receptive fields to restrict invariance, and discriminant selection of prototypes could also be combined with bioSIFT. The base performance of the network with bioSIFT (54.5%) is superior to that of all comparable extensions of [23] (49%). This can be attributed to the fact that those extensions are mostly heuristic, while those now proposed have a more sound decision-theoretical basis. In fact, the simple addition of bioSIFT features to the HMAX network outperforms all extensions of [23] up to the prototype selection stage (54%). When bioSIFT is complemented with limited C2 invariance and prototype selection the performance improves to 69%, which is better than all results from [23]. In fact, the HMAX network with bioSIFT outperforms the state-of-the-art 1 performance (65.5%) for biologically inspired networks [15]. This improvement is interesting, given that these networks also implement most of the operations of the bioSIFT unit (filtering, normalization, pooling, saturation, etc.). 
The main difference is that this is done without a clear functional justification, optimality criteria, or statistical interpretation. As a result, the sequence of operations is not the same, there is no guarantee that normalization provides optimal estimates of orientation dominance, or even that it corresponds to optimal statistical learning, as in (8).

4.2 Natural scene classification

When compared to the state-of-the-art from the computer vision literature, the HMAX+bioSIFT network does not fare as well. Most notably, it has worse performance than the method of Yang et al. [26], which holds the current best results for this dataset (single descriptor methods). This is explained by two main reasons. The first is that the networks are not equivalent. Yang et al. rely on a sparse coding representation in layer 2, which is likely to be more effective than the simple Gaussian units of HMAX. This problem could be eliminated by combining bioSIFT with the same sparse representation, something that we have not attempted. A second reason is that bioSIFT is not exactly optimal for Caltech, because this dataset contains various classes with many non-natural images. To avoid this problem, we have also evaluated the bioSIFT features on the scene classification task of [16]. Using the same HMAX setup, a simple linear classifier and 3000 layer-2 units, the network achieves a classification performance of 80.1% (see Figure 4). This is a substantial improvement, since these results are nearly identical to those of Yang et al. [26], and better than many of those of other methods from the computer vision literature. Overall, these results suggest that orientation dominance is an important property for visual recognition. In particular, the improved performance of the bioSIFT units cannot be explained by the importance of contrast normalization alone: contrast is not a major nuisance for the datasets considered, normalization is also implemented by the other networks, bioSIFT is not optimized to normalize contrast, and it is unlikely that contrast variations would be more of an issue on Caltech than on the natural scene dataset.

Note: [14] reports 65%, but for a network with a much larger number of units (SVM dimension) than is used by all the other networks. Our implementation of their network with comparable parameters only achieved 42%.

References

[1] W. T. Freeman and M. Roth, "Orientation histograms for hand gesture recognition," in IEEE Intl. Wkshp. on Automatic Face and Gesture Recognition, 1995.
[2] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV, vol. 60(2), pp. 91-110, 2004.
[3] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. CVPR, 2005.
[4] D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex," Journal of Physiology, vol. 160, 1962.
[5] R. Shapley and J. D. Victor, "The contrast gain control of the cat retina," Vision Research, vol. 19, pp. 431-434, 1979.
[6] S. E. Palmer, Vision Science: Photons to Phenomenology. The MIT Press, 1999.
[7] D. Heeger, "Normalization of cell responses in cat striate cortex," Visual Neuroscience, vol. 9, 1992.
[8] M. Carandini, D. J. Heeger, and J. A. Movshon, "Linearity and normalization in simple cells of macaque primary visual cortex," Journal of Neuroscience, vol. 17, pp. 8621-8644, 1997.
[9] M. Carandini, J. B. Demb, V. Mante, D. J. Tolhurst, Y. Dan, B. A. Olshausen, J. L. Gallant, and N. C.
Rust, "Do we know what the early visual system does?," Journal of Neuroscience, vol. 25, 2005.
[10] D. Gao and N. Vasconcelos, "Decision-theoretic saliency: computational principles, biological plausibility, and implications for neurophysiology and psychophysics," Neural Computation, vol. 21, 2009.
[11] M. Chirimuuta and D. J. Tolhurst, "Does a Bayesian model of V1 contrast coding offer a neurophysiological account of contrast discrimination?," Vision Research, vol. 45, pp. 2943-2959, 2005.
[12] M. Carandini, Receptive Fields and Suppressive Fields in the Early Visual System. MIT Press, 2004.
[13] T. Serre, L. Wolf, and T. Poggio, "Object recognition with features inspired by visual cortex," in IEEE Conf. CVPR, 2005.
[14] N. Pinto, D. Cox, and J. Dicarlo, "Why is real-world visual object recognition hard?," PLoS Computational Biology, 2008.
[15] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, "What is the best multi-stage architecture for object recognition?," in Proc. IEEE International Conference on Computer Vision, 2009.
[16] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR, 2006.
[17] F. Attneave, "Informational aspects of visual perception," Psychological Review, vol. 61, pp. 183-193, 1954.
[18] H. B. Barlow, "Redundancy reduction revisited," Network: Computation in Neural Systems, vol. 12, 2001.
[19] D. C. Knill and W. Richards, Perception as Bayesian Inference. Cambridge University Press, 1996.
[20] D. G. Albrecht and D. B. Hamilton, "Striate cortex of monkey and cat: contrast response function," Journal of Neurophysiology, vol. 48, pp. 217-237, 1982.
[21] M. C. Morrone, D. C. Burr, and L. Maffei, "Functional implications of cross orientation inhibition of cortical visual cells I. Neurophysiological evidence," Proc. Royal Society London B, vol. 216, pp. 335-354, 1982.
[22] M. J. Wainwright, O. Schwartz, and E. P. Simoncelli, "Natural image statistics and divisive normalization: Modeling nonlinearities and adaptation in cortical neurons," in Probabilistic Models of the Brain: Perception and Neural Function, pp. 203-222, MIT Press, 2002.
[23] J. Mutch and D. Lowe, "Object class recognition and localization using sparse features with limited receptive fields," IJCV, vol. 80, pp. 45-57, 2008.
[24] H. Zhang, A. Berg, M. Maire, and J. Malik, "SVM-KNN: Discriminative nearest neighbor classification for visual category recognition," in Proc. IEEE Conf. CVPR, 2006.
[25] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proc. IEEE Conf. CVPR, 2008.
[26] J. Yang, K. Yu, Y. Gong, and T. Huang, "Linear spatial pyramid matching using sparse coding for image classification," in Proc. IEEE Conf. CVPR, 2009.
[27] L. Fei-Fei and P. Perona, "A Bayesian hierarchical model for learning natural scene categories," in Proc. IEEE Conf. CVPR, 2005.
[28] J. C. van Gemert, J. M. Geusebroek, C. J. Veenman, and A. W. M. Smeulders, "Kernel codebooks for scene categorisation," in Proc. ECCV, 2008.
Fractionally Predictive Spiking Neurons

Jaldert O. Rombouts
CWI, Life Sciences
Amsterdam, The Netherlands
[email protected]

Sander M. Bohte
CWI, Life Sciences
Amsterdam, The Netherlands
[email protected]

Abstract

Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.

1 Introduction

A key issue in computational neuroscience is the interpretation of neural signaling, as expressed by a neuron's sequence of action potentials. An emerging notion is that neurons may in fact encode information at multiple timescales simultaneously [1, 2, 3, 4]: the precise timing of spikes may be conveying high-frequency information, and slower measures, like the rate of spiking, may be relating low-frequency information. Such multi-timescale encoding comes naturally, at least for sensory neurons, as the statistics of the outside world often exhibit self-similar multi-timescale features [5] and the magnitude of natural signals can extend over several orders. Since neurons are limited in the rate and resolution with which they can emit spikes, the mapping of large dynamic-range signals into spike-trains is an integral part of attempts at understanding neural coding. Experiments have extensively demonstrated that neurons adapt their response when facing persistent changes in signal magnitude. Typically, adaptation changes the relation between the magnitude of the signal and the neuron's discharge rate. Since adaptation thus naturally relates to neural coding, it has been extensively scrutinized [6, 7, 8]. Importantly, adaptation is found to additionally exhibit features like dynamic gain control, when the standard deviation but not the mean of the signal changes [1], and long-range time-dependent changes in the spike-rate response are found in response to large-magnitude signal steps, with the changes following a power-law decay (e.g. [9]). Tying the notions of self-similar multi-scale natural signals and adaptive neural coding together, it has recently been suggested that neuronal adaptation allows neuronal spiking to communicate a fractional derivative of the actual computed signal [10, 4]. Fractional derivatives are a generalization of standard "integer" derivatives ("first order", "second order") to real-valued derivatives (e.g. "0.5th order"). A key feature of such derivatives is that they are non-local, and rather convey information over essentially a large part of the signal spectrum [10].
Here, we show how neural spikes can encode temporal signals when the spike-train itself is taken as the fractional derivative of the signal. We show that this is the case for a signal approximated by a sum of shifted power-law kernels starting at respective times $t_i$ and decaying proportional to $1/(t - t_i)^\beta$. Then, the fractional derivative of this approximated signal corresponds to a sum of spikes at times $t_i$, provided that the order of fractional differentiation $\alpha$ is equal to $1 - \beta$: a spike-train is the $\alpha = 0.2$ fractional derivative of a signal approximated by a sum of power-law kernels with exponent $\beta = 0.8$. Such signal encoding with power-law kernels can be carried out, for example, with simple standard thresholding spiking neurons with a refractory reset following a power-law.

As fractional derivatives contain information over many time-ranges, they are naturally suited for predicting signals. This links to notions of predictive coding, where neurons communicate deviations from expected signals rather than the signal itself. Predictive coding has been suggested as a key feature of neuronal processing in e.g. the retina [11]. For self-similar scale-free signals, future signals may be influenced by past signals over very extended time-ranges: so-called long-memory. For example, fractional Brownian motion (fBm) can exhibit long-memory, depending on its Hurst parameter $H$: fBm models with $H > 0.5$ exhibit long-range dependence (long-memory), where the autocorrelation function follows a power-law decay [12]. The long-memory nature of signals approximated with sums of power-law kernels naturally extends this signal approximation into the future along the autocorrelation of the signal, at least for self-similar $1/f^\gamma$-like signals. The key "predictive" assumption we make is that a neuron's spike-train up to time $t$ contains all the information that the past signal contributes to the future signal at $t' > t$.

The correspondence between a spike-train as a fractional derivative and a signal approximated as a sum of power-law kernels is only exact when spike-trains are taken as a sum of Dirac-$\delta$ functions and the power-law kernels as $1/t^\beta$. As both responses are singular, neurons would only be able to approximate this. We show empirically how sums of (approximated) $1/t^\beta$ power-law kernels can accurately approximate long-memory fBm signals via simple difference thresholding, in an online greedy fashion. Thus encoding signals, we show that the power-law kernels approximate synthesized signals with about half the number of spikes needed to obtain the same Signal-to-Noise-Ratio, when compared to the same encoding method using similar but exponentially decaying kernels. We further demonstrate the approximation of sine-wave-modulated white-noise signals with sums of power-law kernels. The resulting spike-trains, expressed as "instantaneous spike-rate", exhibit the phase-precession as in [4], with suppression of activity on the "back" of the sine-wave modulation, and stronger suppression for lower values of the power-law exponent (corresponding to a higher order for our fractional derivative). We find the effect is stronger when encoding the actual sine-wave envelope, mimicking the difference between thalamic and cortical neurons reported in [4]. This may suggest that these cortical neurons are more concerned with encoding the sine-wave envelope. The power-law approximation also allows for the transparent and straightforward implementation of temporal signal filtering by a post-synaptic, receiving neuron.
Since neural decoding by a receiving neuron corresponds to adding a power-law kernel for each received spike, modifying this receiving power-law kernel then corresponds to a temporal filtering operation, effectively exploiting the wide-spectrum nature of power-law kernels. This is particularly relevant since, as has been amply noted [9, 14], power-law dynamics can be closely approximated by a weighted sum or cascade of exponential kernels. Temporal filtering would then correspond to simply tuning the weights for this sum or cascade. We illustrate this notion with an encoding/decoding example for both a high-pass and a low-pass filter.

2 Power-law Signal Encoding

Neural processing can often be reduced to a Linear-Non-Linear (LNL) filtering operation on incoming signals [15] (Figure 1), where inputs are linearly weighted and then passed through a non-linearity to yield the neural activation. As this computation yields analog activations, and neurons communicate through spikes, the additional problem faced by spiking neurons is to decode the incoming signal and then encode the computed LNL filter again into a spike-train. The standard spiking neuron model is that of Linear-Nonlinear-Poisson spiking, where spikes have a stochastic relationship to the computed activation [16]. Here, we interpret the spike encoding and decoding in the light of processing and communicating signals with fractional derivatives [10].

Figure 1: Linear-Non-Linear filter, with spike-decoding front-end and spike-encoding back-end.

At least for signals with mainly (relatively) high-frequency components, it has been well established that a neural signal can be decoded with high fidelity by associating a fixed kernel with each spike and summing these kernels [17]; keeping track of doublet and triplet spikes allows for even greater fidelity. This approach however only worked for signals with a frequency response lacking low frequencies [17]. Low-frequency changes lead to "adaptation", where the kernel is adapted to fit the signal again [18]. For long-range predictive coding, the absence of low frequencies leaves little to predict, as the effective correlation time of the signals is then typically very short as well [17].

Using the notion of predictive coding in the context of (possible) long-range dependencies, we define the goal of signal encoding as follows: let a signal $x_j(t)$ be the result of the continuous-time computation in neuron $j$ up to time $t$, and let neuron $j$ have emitted spikes $t_j$ up to time $t$. These spikes should be emitted such that the signal $x_j(t')$ for $t' < t$ is decoded up to some signal-to-noise ratio, and these spikes should be predictive for $x_j(t')$ for $t' > t$ in the sense that no additional spikes are needed at times $t' > t$ to convey the predictive information available up to time $t$.

Taking kernels as a signal filter of fixed width, as in the general approach in [17], has the important drawback that the signal reconstruction incurs a delay for the duration of the filter: its detection cannot be communicated until the filter is actually matched to the signal. This is inherent to any backward-looking filter-matching solution. Alternatively, a predictive coding approach could rely on only a very short backward-looking filter, minimizing the delay in the system, and continuously computing a forward predictive signal. At any time in the future then, only deviations of the actual signal from this expectation are communicated.
2.1 Spike-trains as fractional derivative

As recent work has highlighted the possibility that neurons encode fractional derivatives, it is noteworthy that the non-local nature of fractional calculus offers a natural framework for predictive coding. In particular, as we will show, when we assume that the predictive information about the future signal is fully contained in the current set of spikes, a signal approximated as a sum of power-law kernels corresponds to a fractional derivative in the form of a sum of Dirac-$\delta$ functions, which the neuron can communicate through timed spikes.

The fractional derivative $r(t)$ of a signal $x(t)$ is denoted as $D^\alpha x(t)$, and intuitively expresses
\[ r(t) = \frac{d^\alpha x(t)}{dt^\alpha}, \]
where $\alpha$ is the fractional order, e.g. 0.5. This is most conveniently computed through the Fourier transform in the frequency domain, as a simple multiplication:
\[ R(\omega) = H(\omega) X(\omega), \]
where the Fourier-transformed fractional derivative operator $H(\omega)$ is by definition $(i\omega)^\alpha$ [10], and $X(\omega)$ and $R(\omega)$ are the Fourier transforms of $x(t)$ and $r(t)$ respectively.

We assume that neurons carry out predictive coding by emitting spikes such that all predictive information is contained in the current spikes, and no more spikes will be fired if the signal follows this prediction. Approximating spikes by Dirac-$\delta$ functions, we take the spike-train up to some time $t_0$ to be the fractional derivative of the past signal and to be fully predictive for the expected influence the past signal has on the future signal:
\[ r(t) = \sum_{t_i < t_0} \delta(t - t_i). \]
The task is to find a signal $\hat{x}(t)$ that corresponds to an approximation of the actual signal $x(t)$ up to $t_0$, and where the predicted signal contribution $x(t)$ for $t > t_0$ due to $x(t < t_0)$ does not require additional future spikes. We note that a sum of power-law decaying kernels with power-law $t^{-\beta}$ for $\beta = 1 - \alpha$ corresponds to such a fractional derivative: the Fourier transform of a power-law decaying kernel of the form $t^{-\beta}$ is proportional to $(i\omega)^{\beta-1}$, hence for a signal that just experienced a single step from 0 to 1 at time $t$ we get
\[ R(\omega) = (i\omega)^\alpha (i\omega)^{\beta-1}, \]
and setting $\alpha = 1 - \beta$ yields a constant in Fourier space, which of course is the Fourier transform of $\delta(t)$. It is easy to check that shifted power-law decaying kernels, e.g. $(t - t_a)^{-\beta}$, correspond to a shifted fractional derivative $\delta(t - t_a)$, and the fractional derivative of a sum of shifted power-law decaying kernels corresponds to a sum of shifted delta functions. Note that for decaying power-laws we need $\beta > 0$, and for fractional derivatives we require $\alpha > 0$.

Thus, with the reverse reasoning, a signal approximated as the sum of power-law decaying kernels corresponds to a spike-train with spikes positioned at the start of the kernels, and, beyond a current time $t$, this sum of decaying kernels is interpreted as a prediction of the extent to which the future signal can be predicted by the past signal.

Obviously, both the Dirac-$\delta$ function and the $1/t^\beta$ kernels are singular (Figure 2a) and can only be approximated. For real applications, only some part of the $1/t^\beta$ curve can be considered, effectively leaving the magnitude of the kernel and the high-frequency component (the extent to which the initial $1/t^\beta$ peak is approximated) as free parameters. Figure 2b illustrates the signal approximated by a random spike-train; as compared to a sum of exponentially decaying $\alpha$-kernels, the long-memory effect of power-law decay kernels is evident.

Figure 2: a) Signal $x(t)$ and corresponding fractional derivative $r(t)$: $1/t^\beta$ power-laws and delta functions. b) Power-law approximation, timed to spikes, compared to a sum of $\alpha$-functions ($\tau = 10$ ms; black dashed line). c) Approximated $1/t^\beta$ power-law kernel ($\beta = 0.5$) for different values of $k$ ($k = 400, 50, 10$) from eq. (2). d) The approximated $1/t^\beta$ power-law kernel (blue line) can be decomposed as a weighted sum of $\alpha$-functions with various decay time constants (dashed lines).
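To make the Fourier-domain relation above concrete, the following is a minimal numerical sketch (ours, not part of the paper) that computes a fractional derivative by multiplying the FFT of a signal by $(i\omega)^\alpha$, and checks that applying $D^{1-\beta}$ to a truncated $t^{-\beta}$ kernel yields an approximately impulse-like response, up to truncation and periodic-boundary artifacts. All names and parameter values are our own choices.

```python
import numpy as np

def fractional_derivative(x, alpha, dt):
    """Compute D^alpha x via the FFT: multiply the spectrum by (i*omega)**alpha.
    With the principal branch, H(-w) = conj(H(w)), so the result is real up to
    numerical error; np.real just discards that error."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)  # angular frequencies
    h = (1j * omega) ** alpha                      # fractional-derivative operator
    h[0] = 0.0                                     # guard the DC term
    return np.real(np.fft.ifft(h * np.fft.fft(x)))

dt, beta = 1e-3, 0.8
t = np.arange(1, 5001) * dt                        # start at dt: avoid the t=0 singularity
kernel = t ** (-beta)                              # truncated power-law kernel
r = fractional_derivative(kernel, alpha=1.0 - beta, dt=dt)
# r should be concentrated near t = 0, i.e. approximately delta-like
print(np.argmax(np.abs(r)), np.abs(r[:5]).sum() / np.abs(r).sum())
```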
Thus, with the reverse reasoning, a signal approximated as the sum of power-law decaying kernels corresponds to a spike-train with spikes positioned at the start of the kernel, and, beyond a current time t, this sum of decaying kernels is is interpreted as a prediction of the extent to which the future signal can be predicted by the past signal. Obviously, both the Dirac-? function and the 1/t? kernels are singular (figure 2a) and can only be approximated. For real applications, only some part of the 1/t? curve can be considered, effectively leaving the magnitude of the kernel and the high frequency component (the extend to which the initial 1/t? peak is approximated) as free parameters. Figure 2b illustrates the signal approximated by a random spikes train; as compared to a sum of exponentially decaying ?-kernels, the longmemory effects of power-law decay kernels is evident. 4 2.2 Practical encoding To explore the efficacy of the power-law kernel approach to signal encoding/decoding, we take a standard thresholding online approximation approach, where neurons communicate only deviations between the current computed signal x(t) and the emitted approximated signal x ?(t) exceeding some threshold ?. The emitted signal x ?(t) is constructed as the (delayed) sum of filter kernels ? each starting at the time of the emitted spike: X x ?(t) = ?(t ? (tj + ?)), tj <t the delay ? corresponds to the time-window over which the neuron considers the difference between computed and emitted signal. In a spiking neuron, such computation would be implemented simply by for instance a refractory current following a power-law. Allowing for both positive and negative spikes (corresponding to tightly coupled neurons with reversed threshold polarity [17]), this would expand to: X X x ?(t) = ?(t ? (t+ ?(t ? (t? j + ?)) ? j + ?)). t? j <t t+ j <t Considering just the fixed time-window thresholding approach, a spike is emitted each time the difference between the computed signal x(t) and the emitted signal x ?(t) plus (or minus) the kernel ?(t) summed over some time-window exceeds the threshold ?: r(t0 ) = ?(t0 ) if t0 X |x(? ) ? x ?(? )| ? |x(? ) ? (? x(? ) + ?(? ))|) > ?, ? =t0 ?? = ??(t0 ) if t0 X |x(? ) ? x ?(? )| ? |x(? ) ? (? x(? ) ? ?(? ))|) > ?, (1) ? =t0 ?? the signal approximation improvement is computed here as the absolute value of the difference between the current signal noise and the signal noise when a kernel is added (or subtracted). As an approximation of 1/t? power-law kernels, we let the kernel first quickly rise, and then decay according to the power-law. For a practical implementation, we use a 1/t? signal multiplied by a modified version of the logistic sigmoid function logsig(t) = 1/(1 + exp(?t)): v(t, k) = 2 logsig(kt) ? 1, such that the kernel becomes: ?(t) = ?v(t, k)1/t? , (2) 0 where ?(t) is zero for t < t, and parameter k determines the angle of the initial increasing part of the kernel. The resulting kernel is further scaled by a factor ? to achieve a certain signal approximation precision (kernels for power-law exponential ? = 0.5 and several values of k are shown in figure 2c). As an aside, the resulting (normalized) power-law kernel can very accurately be approximated over multiple orders of magnitude by a sum of just 11 ?-function exponentials (figure 2d). Next, we compare the efficiency of signal approximation with power-law predictive kernels as compared to the same approximation using standard fixed kernels. 
For this, we synthesize self-similar signals with long-range dependencies. We first remark on some properties of self-similar signals with power-law statistics, and on how to synthesize them.

2.3 Self-similar signals with power-law statistics

There is extensive literature on the synthesis of statistically self-similar signals with $1/f$-like statistics, going back at least to Kolmogorov [19] and Mandelbrot [20]. Self-similar signals exhibit slowly decaying variances, long-range dependencies, and a spectral density following a power law. Importantly, for wide-sense self-similar signals, the autocorrelation function also decays following a power-law. Although various distinct classes of self-similar signals with $1/f$-like statistics exist [12], fractional Brownian motion (fBm) is a popular model for many natural signals. Fractional Brownian motion is characterized by its Hurst parameter $H$, where $H = 0.5$ corresponds to regular Brownian motion, and fBm models with $H > 0.5$ exhibit long-range (positive) dependence. The spectral density of an fBm signal is proportional to a power-law, $1/f^\gamma$, where $\gamma = 2H + 1$. We used fractional Brownian motion to generate self-similar signals for various $H$ values, using the wfbm function from the Matlab wavelet toolbox.
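For readers without access to Matlab's wfbm, a rough spectral-synthesis alternative (our own sketch, not the method used in the paper) is to shape white Gaussian noise so that its power spectrum follows $1/f^\gamma$ with $\gamma = 2H + 1$:

```python
import numpy as np

def fbm_spectral(n, hurst, seed=0):
    """Approximate fBm by 1/f^gamma spectral shaping, gamma = 2H + 1.
    This matches the target power spectrum but is only an approximation of
    true fractional Brownian motion (unlike wfbm's wavelet-based synthesis)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n, d=1.0)
    gamma = 2.0 * hurst + 1.0
    shaping = np.zeros_like(freqs)
    shaping[1:] = freqs[1:] ** (-gamma / 2.0)   # amplitude ~ sqrt(power) ~ f^(-gamma/2)
    signal = np.fft.irfft(np.fft.rfft(white) * shaping, n=n)
    return signal / signal.std()

x = fbm_spectral(16000, hurst=0.75)             # e.g. a 16 s signal at 1 kHz, H = 0.75
```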
We should remark that without negative spikes, there is no longer a clear performance advantage for power-law kernels (even for large ?): where power-law kernels are beneficial on the rising part of a signal, they lose on downslopes where their slow decay cannot follow the signal. 3.2 Sine-wave modulated white-noise Fractional derivatives as an interpretation of neuronal firing-rate has been put forward by a series of recent papers [10, 21, 4], where experimental evidence was presented to suggest such an interpretation. A key finding in [4] was that the instantaneous firing rate of neurons along various processing stages of a rat?s whisker movement exhibit a phase-lead relative to the amplitude of the movement modulation. The phase-lead was found to be greater for cortical neurons as compared to thalamic neurons. When the firing rate corresponds to the ?-order fractional derivative, the phase-lead would correspond to greater fractional order ? in the cortical neurons [10] . We used the sumof-power-laws to approximate both the sine-wave-modulated white noise and the actual sine-wave itself, and found similar results (figure 4): smaller power-law exponents, in our interpretation also corresponding to larger fractional derivative orders, lead to increasingly fewer spikes at the back of the sine-wave (both in the case where we encode the signal with both positive and negative spikes ? then counting only the positive spikes ? and when the signal is approximated with only positive spikes ? not shown). We find an increased phase-lead when approximating the actual sine-wave ker6 Approximation white noise with sine wave modulation normalized rate 4 3.5 2.5 3 2 2.5 1.5 2 1 ? = 0.9 ? = 0.5 signal 1.5 1 0 2 4 6 8 10 time(s) 12 Sine-wave approximation 3 ? = 0.9 ? = 0.5 signal 0.5 14 16 0 0 2 4 6 8 time(s) 10 12 14 16 Figure 4: Sinewave phase-lead. Left: when encoding sine-wave modulated white noise (inset); right: encoding the sine-wave signal itself (inset). Average firing rate is computed over 100ms, and normalized to match the sine-wave kernel. 60 high pass filter signal approximation 80 50 40 30 1 power?law kernel as sum of exponents 20 10 60 0 ?10 40 0 10 2 10 time(ms) 0 0 0 100 200 300 time (ms) 400 2 10 500 10 4 10 ?5 10 0 10 5 10 ?20 ?30 ?40 10 20 low pass filter 60 50 40 0 30 20 10 ?20 0 0 10 ?5 10 0 10 freq(Hz) 10 ?40 0 5000 time(ms) 10000 2 10 time(ms) 4 10 ?5 10 0 10 freq(Hz) 5 10 ?10 ?20 ?30 ?40 0 5000 time(ms) 10000 Figure 5: Illustration of frequency filtering with modified decoding kernels. The square boxes show the respective kernels in both time and frequency space. See text for further explanation. nel as opposed to the white-noise modulation, suggesting that perhaps cortical neurons more closely encode the former as compared to thalamic neurons. 3.3 Signal Frequency Filtering For a receiving neuron i to properly interpret a spike-train r(t)j from neuron j, both neurons would need to keep track of past events over extended periods of time: current spikes have to be added to or subtracted from the future expectation signal that was already communicated through past spikes. The required power-law processes can be implemented in various manners, for instance as a weighted sum or a cascade of exponential processes [9, 10]. 
A natural benefit of implementing power-law kernels as a weighted sum or cascade of exponentials is that a receiving neuron can carry out temporal signal filtering simply by tuning the respective weight parameters for the kernel with which it decodes spikes into a signal approximation. In figure 5, we illustrate this with power-law kernels that are transformed into high-pass and lowpass filters. We first approximated our power-law kernel (2) with a sum of 11 exponentials (depicted in the left-center inset). Using this approximation, we encoded the signal (figure 5, center). The signal was then reconstructed using the resultant spikes, using the power-law kernel approximation, but with some zeroed out exponentials (respectively the slowly decaying exponentials for the highpass filter, and the fast-decaying kernels for the low-pass filter). Figure 5, most right, shows the resulting filtered signal approximations. Obviously, more elaborate tuning of the decoding kernel with a larger sum of kernels can approximate a vast variety of signal filters. 7 4 Discussion Taking advantage of the relationship between power-laws and fractional derivatives, we outlined the peculiar fact that a sum of Dirac-? functions, when taken as a fractional derivative, corresponds to a signal in the form of a sum of power-law kernels. Exploiting the obvious link to spiking neural coding, we showed how a simple thresholding spiking neuron can compute a signal approximation as a sum of power-law kernels; importantly, such a simple thresholding spiking neuron closely fits standard biological spiking neuron models, when the refractory response follows a power-law decay (e.g. [22]). We demonstrated the usefulness of such an approximation when encoding slowly varying signals, finding that encoding with power-law kernels significantly outperformed similar but exponentially decaying kernels that do not take long-range signal dependencies into account. Compared to the work where the firing rate is considered as a fractional derivative, e.g. [10], the present formulation extends the notion of neural coding with fractional derivatives to individual spikes, and hence finer temporal variations: each spike effectively encodes very local signal variations, while also keeping track of long-range variations. The interpretation in [10] of the fractional derivative r(t) as a rate leads to a 1:1 relation between the fractional derivative order and the powerlaw decay exponent of adaptation of about 0.2 [10, 13, 9]. For such fractional derivative ?, our derivation implies a power-law exponent for the power law kernels ? = 1 ? ? ? 0.8, consistent with our sine-wave reconstruction, as well as with recent adapting spiking neuron models [22]. We find that when signals are approximated with non-coupled positive and negative neurons (i.e. one neuron encodes the positive part of the signal, the other the negative), such much faster-decaying power-law kernels encode more efficiently than slower decaying ones. Non-coupled signal encoding obviously fair badly when signals rapidly change polarity; this however seems consistent with human illusory experiences [23]. As noted, the singularity of 1/t? power-law kernels means that initial part of the kernel can only be approximated. Here, we initially focused our simulation on the use of long-range power-law kernels for encoding slowly varying signals. 
4 Discussion

Taking advantage of the relationship between power-laws and fractional derivatives, we outlined the peculiar fact that a sum of Dirac-$\delta$ functions, when taken as a fractional derivative, corresponds to a signal in the form of a sum of power-law kernels. Exploiting the obvious link to spiking neural coding, we showed how a simple thresholding spiking neuron can compute a signal approximation as a sum of power-law kernels; importantly, such a simple thresholding spiking neuron closely fits standard biological spiking neuron models, when the refractory response follows a power-law decay (e.g. [22]). We demonstrated the usefulness of such an approximation when encoding slowly varying signals, finding that encoding with power-law kernels significantly outperformed similar but exponentially decaying kernels that do not take long-range signal dependencies into account.

Compared to the work where the firing rate is considered as a fractional derivative, e.g. [10], the present formulation extends the notion of neural coding with fractional derivatives to individual spikes, and hence to finer temporal variations: each spike effectively encodes very local signal variations, while also keeping track of long-range variations. The interpretation in [10] of the fractional derivative $r(t)$ as a rate leads to a 1:1 relation between the fractional derivative order and the power-law decay exponent of adaptation, of about 0.2 [10, 13, 9]. For such a fractional derivative $\alpha$, our derivation implies a power-law exponent for the power-law kernels of $\beta = 1 - \alpha \approx 0.8$, consistent with our sine-wave reconstruction, as well as with recent adapting spiking neuron models [22]. We find that when signals are approximated with non-coupled positive and negative neurons (i.e. one neuron encodes the positive part of the signal, the other the negative), much faster-decaying power-law kernels encode more efficiently than slower-decaying ones. Non-coupled signal encoding obviously fares badly when signals rapidly change polarity; this however seems consistent with human illusory experiences [23]. As noted, the singularity of $1/t^\beta$ power-law kernels means that the initial part of the kernel can only be approximated. Here, we initially focused our simulation on the use of long-range power-law kernels for encoding slowly varying signals. A more detailed approximation of this initial part of the kernel may be needed to incorporate effects like gain modulation [24, 8], and to determine to what extent the power-law kernels already account for this phenomenon. This would also provide a natural link to existing neural models of spike-frequency adaptation, e.g. [25], as they are primarily concerned with modeling the spiking neuron behavior rather than the computational aspects.

We used a greedy online thresholding process to determine when a neuron would spike to approximate a signal, in contrast to offline optimization methods that place spikes at optimal times, like Smith & Lewicki [26]. The key difference of course is that the latter work is concerned with decoding a signal, and in effect attempts to determine the effective neural (temporal) filter. As we aimed to illustrate in the signal filtering example, these notions are not mutually exclusive: a receiving neuron could very well filter the incoming signal with a carefully shaped weighted sum of kernels, and then, when the filter is activated, signal the magnitude of the match through fractional spiking. Predictive coding seeks to find a careful balance between encoding known information as well as future, derived expectations [27]. It does not seem unreasonable to formulate this balance as a no-going-back problem, where current computations are projected forward in time, and corrected where needed. In terms of spikes, this would correspond to our assumption that, absent new information, no additional spikes need to be fired by a neuron to transmit this forward information.

The kernels we find are somewhat in contrast to the kernels found by Bialek et al. [17], where the optimal filter exhibited both a negative and a positive part and no long-range "tail". Several practical issues may contribute to this difference, not least the relative absence of low-frequency variations, as well as the fact that the signal considered is derived from the fly's H1 neurons. These two neurons have only partially overlapping receptive fields, and the separation into positive and negative spikes is thus slightly more intricate. We need to remark though that we see no impediment for the presented signal approximation to be adapted to such situations, or to situations where more than two neurons encode fractions of a signal, as in population coding, e.g. [28]. The issue of long-range temporal dependencies as discussed here seems to be relatively unappreciated. Long-range power-law dynamics potentially offer a variety of "hooks" for computation through time [9], like temporal-difference learning and relative temporal computations (possibly exploiting spatial and temporal statistical correspondences [29]).

Acknowledgement: JOR supported by NWO Grant 612.066.826, SMB partly by NWO Grant 639.021.203.

References

[1] A.L. Fairhall, G.D. Lewen, W. Bialek, and R.R.R. van Steveninck. Multiple timescales of adaptation in a neural code. In NIPS, volume 13. The MIT Press, 2001.
[2] B. Wark, A. Fairhall, and F. Rieke. Timescales of inference in visual adaptation. Neuron, 61(5):750-761, 2009.
[3] S. Panzeri, N. Brunel, N.K. Logothetis, and C. Kayser. Sensory neural codes using multiplexed temporal scales. Trends in Neurosciences, in press, 2010.
[4] B.N. Lundstrom, A.L. Fairhall, and M. Maravall. Multiple timescale encoding of slowly varying whisker stimulus envelope in cortical and thalamic neurons in vivo. J. of Neurosci., 30(14):50-71, 2010.
[5] J.H. Van Hateren.
Processing of natural time series of intensities by the visual system of the blowfly. Vision Research, 37(23):3407-3416, 1997.
[6] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695-702, 2000.
[7] B. Wark, B.N. Lundstrom, and A. Fairhall. Sensory adaptation. Current Opinion in Neurobiology, 17(4):423-429, 2007.
[8] M. Famulare and A.L. Fairhall. Feature selection in simple neurons: how coding depends on spiking dynamics. Neural Computation, 22:1-18, 2009.
[9] P.J. Drew and L.F. Abbott. Models and properties of power-law adaptation in neural systems. Journal of Neurophysiology, 96(2):826, 2006.
[10] B.N. Lundstrom, M.H. Higgs, W.J. Spain, and A.L. Fairhall. Fractional differentiation by neocortical pyramidal neurons. Nature Neuroscience, 11(11):1335-1342, 2008.
[11] T. Hosoya, S.A. Baccus, and M. Meister. Dynamic predictive coding by the retina. Nature, 436:71-77, 2005.
[12] G.W. Wornell. Signal Processing with Fractals: A Wavelet-Based Approach. Prentice Hall, NJ, 1999.
[13] Z. Xu, J.R. Payne, and M.E. Nelson. Logarithmic time course of sensory adaptation in electrosensory afferent nerve fibers in a weakly electric fish. Journal of Neurophysiology, 76(3):2020, 1996.
[14] S. Fusi, P.J. Drew, and L.F. Abbott. Cascade models of synaptically stored memories. Neuron, 45:1-14, 2005.
[15] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, USA, 1995.
[16] E.J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199-213, 2001.
[17] F. Rieke, D. Warland, and W. Bialek. Spikes: Exploring the Neural Code. The MIT Press, 1999.
[18] A.L. Fairhall, G.D. Lewen, W. Bialek, and R.R.R. van Steveninck. Efficiency and ambiguity in an adaptive neural code. Nature, 412(6849):787-792, 2001.
[19] A. Kolmogorov. Wienersche Spiralen und einige andere interessante Kurven im Hilbertschen Raum. Comptes Rendus (Doklady) Academy of Sciences USSR (NS), 26:115-118, 1940.
[20] B.B. Mandelbrot and J.W. Van Ness. Fractional Brownian motions, fractional noises and applications. SIAM Review, 10(4):422-437, 1968.
[21] B.N. Lundstrom, M. Famulare, L.B. Sorensen, W.J. Spain, and A.L. Fairhall. Sensitivity of firing rate to input fluctuations depends on time scale separation between fast and slow variables in single neurons. Journal of Computational Neuroscience, 27(2):277-290, 2009.
[22] C. Pozzorini, R. Naud, S. Mensi, and W. Gerstner. Multiple timescales of adaptation in single neuron models. In Front. Comput. Neurosci.: Bernstein Conference on Computational Neuroscience, 2010.
[23] A.A. Stocker and E.P. Simoncelli. Visual motion aftereffects arise from a cascade of two isomorphic adaptation mechanisms. J. Vision, 9(9):1-14, 2009.
[24] S. Hong, B.N. Lundstrom, and A.L. Fairhall. Intrinsic gain modulation and adaptive neural coding. PLoS Computational Biology, 4(7), 2008.
[25] R. Jolivet, A. Rauch, H.R. Luescher, and W. Gerstner. Integrate-and-fire models with adaptation are good enough: predicting spike times under random current injection. NIPS, 18:595-602, 2006.
[26] E. Smith and M.S. Lewicki. Efficient coding of time-relative structure using spikes. Neural Computation, 17(1):19-45, 2005.
[27] N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. arXiv physics/0004057, 2000.
[28] Q.J.M. Huys, R.S. Zemel, R. Natarajan, and P. Dayan. Fast population coding. Neural Computation, 19(2):404-441, 2007.
[29] O. Schwartz, A. Hsu, and P. Dayan.
Space and time in visual context. Nature Rev. Neurosci., 8(11), 2007.
Fast global convergence of gradient methods for high-dimensional statistical recovery

Alekh Agarwal¹  Sahand N. Negahban¹  Martin J. Wainwright¹,²
¹Department of Electrical Engineering and Computer Science and ²Department of Statistics
University of California, Berkeley
Berkeley, CA 94720-1776
{alekh, sahand n, wainwrig}@eecs.berkeley.edu

Abstract

Many statistical M-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer. We analyze the convergence rates of first-order gradient methods for solving such problems within a high-dimensional framework that allows the data dimension $d$ to grow with (and possibly exceed) the sample size $n$. This high-dimensional structure precludes the usual global assumptions, namely strong convexity and smoothness conditions, that underlie classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that Nesterov's first-order method [12] has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical Euclidean distance between the true unknown parameter $\theta^*$ and the optimal solution $\hat{\theta}$. This globally linear rate is substantially faster than previous analyses of global convergence for specific methods, which yielded only sublinear rates. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression using the Lasso ($\ell_1$-regularized regression), group Lasso, block sparsity, and low-rank matrix recovery using nuclear norm regularization. Overall, this result reveals an interesting connection between statistical precision and computational efficiency in high-dimensional estimation.

1 Introduction

High-dimensional data sets present challenges that are both statistical and computational in nature. On the statistical side, recent years have witnessed a flurry of results on consistency and rates for various estimators under high-dimensional scaling, meaning that the data dimension $d$ and other structural parameters are allowed to grow with the sample size $n$. These results typically involve some assumption regarding the underlying structure of the parameter space, including sparse vectors, low-rank matrices, or structured regression functions, as well as some regularity conditions on the data-generating process. On the computational side, many estimators for statistical recovery are based on solving convex programs. Examples of such M-estimators include $\ell_1$-regularized quadratic programming (the Lasso), second-order cone programs for sparse non-parametric regression, and semidefinite programming relaxations for low-rank matrix recovery. In parallel, a line of recent work (e.g., [3, 7, 6, 5, 12, 18]) focuses on polynomial-time algorithms for solving these types of convex programs. Several authors [2, 6, 1] have used variants of Nesterov's accelerated gradient method [12] to obtain algorithms with a global sublinear rate of convergence. For the special case of compressed sensing (sparse regression with incoherent design), some authors have established fast convergence rates in a local sense, once the iterates are close enough to the optimum [3, 5]. Other authors have studied finite convergence of greedy algorithms (e.g., [18]).
If an algorithm identifies the support set of the optimal solution, the problem is then effectively reduced to the lower-dimensional subspace, and thus fast convergence can be guaranteed in a local sense. Also in application to compressed sensing, Garg and Khandekar [4] showed that a thresholded gradient algorithm converges rapidly up to some tolerance; we discuss this result in more detail following our Corollary 2 on this special case of sparse linear models. Unfortunately, for general convex programs with only Lipschitz conditions, the best convergence rates in a global sense using first-order methods are sublinear. Much faster global rates, in particular at a linear or geometric rate, can be achieved if global regularity conditions like strong convexity and smoothness are imposed [11]. However, a challenging aspect of statistical estimation in high dimensions is that the underlying optimization problems can never be globally strongly convex when $d > n$ in typical cases (since the $d \times d$ Hessian matrix is rank-deficient), and global smoothness conditions cannot hold when $d/n \to +\infty$.

In this paper, we analyze a simple variant of the composite gradient method due to Nesterov [12] in application to the optimization problems that underlie regularized M-estimators. Our main contribution is to establish a form of global geometric convergence for this algorithm that holds for a broad class of high-dimensional statistical problems. We do so by leveraging the notion of restricted strong convexity, used in recent work by Negahban et al. [8] to derive various bounds on the statistical error in high-dimensional estimation. Our analysis consists of two parts. We first establish that for optimization problems underlying such M-estimators, appropriately modified notions of restricted strong convexity (RSC) and smoothness (RSM) suffice to establish global linear convergence of a first-order method. Our second contribution is to prove that for the iterates generated by our first-order method, these RSC/RSM assumptions do indeed hold with high probability for a broad class of statistical models, among them sparse linear regression, group-sparse regression, matrix completion, and estimation in generalized linear models. We note in passing that our notion of RSC is related to, but slightly different from, its previous use for bounding statistical error [8], and hence we cannot use these existing results directly.

An interesting aspect of our results is that we establish global geometric convergence only up to the statistical precision of the problem, meaning the typical Euclidean distance $\|\hat{\theta} - \theta^*\|$ between the true parameter $\theta^*$ and the estimate $\hat{\theta}$ obtained by solving the optimization problem. Note that this is very natural from the statistical perspective, since it is the true parameter $\theta^*$ itself (as opposed to the solution $\hat{\theta}$ of the M-estimator) that is of primary interest, and our analysis allows us to approach it as closely as is statistically possible. Overall, our results reveal an interesting connection between the statistical and computational properties of M-estimators: the properties of the underlying statistical model that make it favorable for estimation also render it more amenable to optimization procedures.

The remainder of the paper is organized as follows. In the following section, we give a precise description of the M-estimators considered here, provide definitions of restricted strong convexity and smoothness, and describe their link to the notion of statistical precision.
Section 3 gives a statement of our main result, as well as its corollaries when specialized to various statistical models. Section 4 provides some simulation results that confirm the accuracy of our theoretical predictions. Due to space constraints, we refer the reader to the full-length version of our paper for technical details.

2 Problem formulation and optimization algorithm

In this section, we begin by describing the class of regularized M-estimators to which our analysis applies, as well as the optimization algorithms that we analyze. Finally, we describe the assumptions that underlie our main result.

A class of regularized M-estimators: Given a random variable $Z \sim \mathbb{P}$ taking values in some set $\mathcal{Z}$, let $Z_1^n = \{Z_1, \ldots, Z_n\}$ be a collection of $n$ observations drawn i.i.d. from $\mathbb{P}$. Assuming that $\mathbb{P}$ lies within some indexed family $\{\mathbb{P}_\theta, \theta \in \Omega\}$, the goal is to recover an estimate of the unknown true parameter $\theta^* \in \Omega$ generating the data. In order to do so, we consider the regularized M-estimator

$\widehat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \Omega} \big\{ \mathcal{L}(\theta; Z_1^n) + \lambda_n \mathcal{R}(\theta) \big\}$,   (1)

where $\mathcal{L} : \Omega \times \mathcal{Z}^n \to \mathbb{R}$ is a loss function, and $\mathcal{R} : \Omega \to \mathbb{R}_+$ is a non-negative regularizer on the parameter space. Throughout this paper, we assume that the loss function $\mathcal{L}$ is convex and differentiable, and that the regularizer $\mathcal{R}$ is a norm. In order to assess the quality of an estimate, we measure the error $\|\widehat{\theta}_{\lambda_n} - \theta^*\|$ in some norm induced by an inner product $\langle \cdot, \cdot \rangle$ on the parameter space. Typical choices are the standard Euclidean inner product and $\ell_2$-norm for vectors; the trace inner product and the Frobenius norm for matrices; and the $L^2(\mathbb{P})$ inner product and norm for non-parametric regression. As described in more detail in Section 3.2, a variety of estimators, among them the Lasso, structured non-parametric regression in RKHS, and low-rank matrix recovery, can be cast in this form (1). When the data $Z_1^n$ are clear from the context, we frequently use the shorthand $\mathcal{L}(\theta)$ for $\mathcal{L}(\theta; Z_1^n)$.

Composite objective minimization: In general, we expect the loss function $\mathcal{L}$ to be differentiable, while the regularizer $\mathcal{R}$ can be non-differentiable. Nesterov [12] proposed a simple first-order method to exploit this type of structure, and our focus is a slight variant of this procedure. In particular, given some initialization $\theta^0 \in \Omega$, consider the update

$\theta^{t+1} = \arg\min_{\theta \in B_{\mathcal{R}}(\rho)} \big\{ \langle \nabla\mathcal{L}(\theta^t), \theta \rangle + \tfrac{\gamma_u}{2} \|\theta - \theta^t\|_2^2 + \lambda_n \mathcal{R}(\theta) \big\}$, for $t = 0, 1, 2, \ldots$,   (2)

where $\gamma_u > 0$ is a parameter related to the smoothness of the loss function, and

$B_{\mathcal{R}}(\rho) := \{ \theta \in \Omega \mid \mathcal{R}(\theta) \le \rho \}$   (3)

is the ball of radius $\rho$ in the norm defined by the regularizer. The only difference from Nesterov's method is the additional constraint $\theta \in B_{\mathcal{R}}(\rho)$, which is required for control of early iterates in the high-dimensional setting. Parts of our theory apply to arbitrary choices of the radius $\rho$; for obtaining results that are statistically order-optimal, a setting $\rho = \Theta(\mathcal{R}(\theta^*))$ with $\theta^* \in B_{\mathcal{R}}(\rho)$ is sufficient, so that fairly conservative upper bounds on $\mathcal{R}(\theta^*)$ are adequate.

Structural conditions in high dimensions: It is known that under global smoothness and strong convexity assumptions, the procedure (2) enjoys a globally geometric convergence rate, meaning that there is some $\kappa \in (0,1)$ such that $\|\theta^t - \widehat{\theta}\| = \mathcal{O}(\kappa^t)$ for all iterations $t = 0, 1, 2, \ldots$ (e.g., see Theorem 5 in Nesterov [12]). Unfortunately, in the high-dimensional setting ($d > n$), it is usually impossible to guarantee strong convexity of the problem (1) in a global sense.
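To make the update (2) concrete, the following minimal sketch instantiates it for the Lasso instance treated later (quadratic loss with $\ell_1$ regularizer), under the assumption that the ball constraint $B_{\mathcal{R}}(\rho)$ is inactive (a conservatively large $\rho$), in which case the update reduces to entrywise soft-thresholding. The function names and step parameter are ours, and `gamma_u` should upper-bound the curvature of the loss for the iteration to be stable.

import numpy as np

def soft_threshold(v, tau):
    # Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_gradient_lasso(X, y, lam, gamma_u, T=200):
    # Composite gradient update (2) for the Lasso: quadratic loss
    # L(theta) = (1/2n) ||y - X theta||_2^2 with R(theta) = ||theta||_1.
    # The ball constraint B_R(rho) is taken to be inactive (rho large).
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        grad = X.T @ (X @ theta - y) / n        # gradient of the loss
        theta = soft_threshold(theta - grad / gamma_u, lam / gamma_u)
    return theta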
For instance, when the data is drawn i.i.d., the loss function consists of a sum of $n$ terms. The resulting $d \times d$ Hessian matrix $\nabla^2 \mathcal{L}(\theta; Z_1^n)$ is often a sum of $n$ rank-1 terms and hence rank-degenerate whenever $n < d$. However, as we show in this paper, in order to obtain fast convergence rates for an optimization method, it is sufficient that (a) the objective is strongly convex and smooth in a restricted set of directions, and (b) the algorithm approaches the optimum $\widehat{\theta}$ only along these directions. Let us now formalize this intuition. Consider the first-order Taylor series expansion of the loss function around the point $\theta^0$ in the direction of $\theta$:

$T_{\mathcal{L}}(\theta; \theta^0) := \mathcal{L}(\theta) - \mathcal{L}(\theta^0) - \langle \nabla\mathcal{L}(\theta^0), \theta - \theta^0 \rangle$.   (4)

Definition 1 (Restricted strong convexity (RSC)). We say the loss function $\mathcal{L}$ satisfies the RSC condition with strictly positive parameters $(\gamma_\ell, \tau_\ell, \delta)$ if

$T_{\mathcal{L}}(\theta; \theta^0) \ge \tfrac{\gamma_\ell}{2} \|\theta - \theta^0\|^2 - \tau_\ell \delta^2$ for all $\theta, \theta^0 \in B_{\mathcal{R}}(\rho)$.   (5)

In order to gain intuition for this definition, first consider the degenerate setting $\delta = \tau_\ell = 0$. In this case, imposing the condition (5) for all $\theta \in \Omega$ is equivalent to the usual definition of strong convexity on the optimization set. In contrast, when the pair $(\delta, \tau_\ell)$ are strictly positive, the condition (5) only applies to a limited set of vectors. In particular, when $\theta^0$ is set equal to the optimum $\widehat{\theta}$, and we assume that $\theta$ belongs to the set

$C := B_{\mathcal{R}}(\rho) \cap \big\{ \theta \in \Omega \mid \|\theta - \widehat{\theta}\|^2 \ge \tfrac{4 \tau_\ell \delta^2}{\gamma_\ell} \big\}$,

then condition (5) implies that $T_{\mathcal{L}}(\theta; \widehat{\theta}) \ge \tfrac{\gamma_\ell}{4} \|\theta - \widehat{\theta}\|^2$ for all $\theta \in C$. Thus, for any feasible $\theta$ that is not too close to the optimum $\widehat{\theta}$, we are guaranteed strong convexity in the direction $\theta - \widehat{\theta}$. We now specify an analogous notion of restricted smoothness:

Definition 2 (Restricted smoothness (RSM)). We say the loss function $\mathcal{L}$ satisfies the RSM condition with strictly positive parameters $(\gamma_u, \tau_u, \delta)$ if

$T_{\mathcal{L}}(\theta; \widehat{\theta}) \le \tfrac{\gamma_u}{2} \|\theta - \widehat{\theta}\|^2 + \tau_u \delta^2$ for all $\theta \in B_{\mathcal{R}}(\rho)$.   (6)

Note that the tolerance parameter $\delta$ is the same as that in the definition (5). The additional term $\tau_u \delta^2$ is not present in analogous smoothness conditions in the optimization literature, but it is essential in our set-up.

Loss functions and statistical precision: In order for these definitions to be sensible and of practical interest, it remains to clarify two issues. First, for what types of loss function and regularization pairs can we expect RSC/RSM to hold? Second, what is the smallest tolerance $\delta$ with which they can hold? Past work by Negahban et al. [8] has introduced the class of decomposable regularizers; it includes various regularizers frequently used in M-estimation, among them $\ell_1$-norm regularization, block-sparse regularization, nuclear norm regularization, and various combinations of such norms. Negahban et al. [8] showed that versions of RSC with respect to $\theta^*$ hold for suitable loss functions combined with a decomposable regularizer. The definition of RSC given here is related but slightly different: instead of control in a neighborhood of the true parameter $\theta^*$, we need control over the iterates of an algorithm approaching the optimum $\widehat{\theta}$. Nonetheless, it can also be shown that our form of RSC (and also RSM) holds with high probability for decomposable regularizers, and this fact underlies the corollaries stated in Section 3.2. With regards to the choice of tolerance parameter $\delta$, as our results will clarify, it makes little sense to be concerned with choices that are substantially smaller than the statistical precision of the model.
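Returning to Definition 1, the following small numerical check (a sketch with arbitrary dimensions of our choosing) illustrates the point: for the quadratic loss, $T_{\mathcal{L}}(\theta; \theta^0) = \frac{1}{2}(\theta - \theta^0)^T \nabla^2\mathcal{L} (\theta - \theta^0)$, the Hessian $X^T X / n$ is rank-deficient when $n < d$, yet the curvature along randomly drawn sparse directions remains bounded away from zero with high probability.

import numpy as np

rng = np.random.default_rng(0)
n, d, s = 50, 200, 5                  # n < d, so global strong convexity fails
X = rng.standard_normal((n, d))
H = X.T @ X / n                       # Hessian of the quadratic loss

print(np.linalg.matrix_rank(H))       # at most n = 50 < d = 200

# Curvature delta' H delta along random s-sparse unit directions is
# nonetheless bounded well away from zero with high probability.
for _ in range(3):
    delta = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    delta[support] = rng.standard_normal(s)
    delta /= np.linalg.norm(delta)
    print(delta @ H @ delta)          # typically close to 1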
There are various ways in which statistical precision can be defined; one natural one is $\epsilon_{\mathrm{stat}}^2 := \mathbb{E}\big[\|\widehat{\theta}_{\lambda_n} - \theta^*\|^2\big]$, where the expectation is taken over the randomness in the data-dependent loss function. (Footnote 1: As written, statistical precision also depends on the choice of $\lambda_n$, but our theory will involve specific choices of $\lambda_n$ that are order-optimal.) The statistical precision of various M-estimators under high-dimensional scaling is now relatively well understood, and in the sequel, we will encounter various models for which the RSC/RSM conditions hold with tolerance equal to the statistical precision.

3 Global geometric convergence and its consequences

In this section, we first state the main result of our paper, and discuss some of its consequences. We illustrate its application to several statistical models in Section 3.2.

3.1 Guarantee of geometric convergence

Recall that $\widehat{\theta}_{\lambda_n}$ denotes any optimal solution to the problem (1). Our main theorem guarantees that if the RSC/RSM conditions hold with tolerance $\delta$, then Algorithm (2) is guaranteed to have a geometric rate of convergence up to this tolerance. The theorem statement involves the objective function $\phi(\theta) = \mathcal{L}(\theta) + \lambda_n \mathcal{R}(\theta)$.

Theorem 1 (Geometric convergence). Suppose that the loss function satisfies conditions (RSC) and (RSM) with a tolerance $\delta$ and parameters $(\gamma_\ell, \gamma_u, \tau_\ell, \tau_u)$. Then the sequence $\{\theta^t\}_{t=0}^\infty$ generated by the updates (2) satisfies

$\|\theta^t - \widehat{\theta}\|^2 \le c_0 \Big(1 - \tfrac{\gamma_\ell}{4\gamma_u}\Big)^t + c_1 \delta^2$ for all $t = 0, 1, 2, \ldots$   (7)

where $c_0 := \frac{2(\phi(\theta^0) - \phi(\widehat{\theta}))}{\gamma_\ell}$ and $c_1 := \frac{8}{\gamma_\ell^2}\big(\tau_\ell \gamma_u + \tau_u \gamma_\ell\big)$.

Remarks: Note that the bound (7) consists of two terms: the first term decays exponentially fast with the contraction coefficient $\kappa := 1 - \frac{\gamma_\ell}{4\gamma_u}$. The second term is an additive offset, which becomes relevant only for $t$ large enough such that $\|\theta^t - \widehat{\theta}\|^2 = \mathcal{O}(\delta^2)$. Thus, the result guarantees a globally geometric rate of convergence up to the tolerance parameter $\delta$. Previous work has focused primarily on the case of sparse linear regression. For this special case, certain methods are known to be globally convergent at sublinear rates (e.g., [2]), meaning of the type $\mathcal{O}(1/t^2)$. The geometric rate of convergence guaranteed by Theorem 1 is exponentially faster. Other work on sparse regression [3, 5] has provided geometric rates of convergence that hold once the iterates are close to the optimum. In contrast, Theorem 1 guarantees geometric convergence while the iterates are not too close to the optimum $\widehat{\theta}$.

In Section 3.2, we describe a number of concrete models for which the (RSC) and (RSM) conditions hold with $\delta \asymp \epsilon_{\mathrm{stat}}$, which leads to the following result.

Corollary 1. Suppose that the loss function satisfies conditions (RSC) and (RSM) with tolerance $\delta = \mathcal{O}(\epsilon_{\mathrm{stat}})$ and parameters $(\gamma_\ell, \gamma_u, \tau_\ell, \tau_u)$. Then

$T = \mathcal{O}\bigg( \frac{\log(1/\epsilon_{\mathrm{stat}})}{\log\big(4\gamma_u / (4\gamma_u - \gamma_\ell)\big)} \bigg)$   (8)

steps of the updates (2) ensure that $\|\theta^T - \theta^*\|^2 = \mathcal{O}(\epsilon_{\mathrm{stat}}^2)$.

In the setting of statistical recovery, since the true parameter $\theta^*$ is of primary interest, there is little point in optimizing to a tolerance beyond the statistical precision. To the best of our knowledge, this result, where fast convergence is guaranteed whenever the optimization error is larger than the statistical precision, is the first of its type, and makes for an interesting contrast with other local convergence results.

3.2 Consequences for specific statistical models

We now consider the consequences of Theorem 1 for some specific statistical models. In contrast to the previous deterministic results, these corollaries hold with high probability.
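As a quick illustration of Corollary 1 above, the snippet below evaluates the contraction factor and the iteration count (8) for hypothetical RSC/RSM constants; the numbers are placeholders of ours, not values from the paper.

import numpy as np

# Hypothetical RSC/RSM constants and statistical precision (placeholders).
gamma_l, gamma_u, eps_stat = 0.25, 1.0, 1e-3

kappa = 1 - gamma_l / (4 * gamma_u)                       # contraction factor
T = np.log(1 / eps_stat) / np.log(4 * gamma_u / (4 * gamma_u - gamma_l))
print(kappa, int(np.ceil(T)))                             # 0.9375 and 108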
Sparse linear regression: First, we consider the case of sparse least-squares regression. Given an unknown regression vector $\theta^* \in \mathbb{R}^d$, suppose that we make $n$ i.i.d. observations of the form $y_i = \langle x_i, \theta^* \rangle + w_i$, where $w_i$ is zero-mean noise. For this model, each observation is of the form $Z_i = (x_i, y_i) \in \mathbb{R}^d \times \mathbb{R}$. In a variety of applications, it is natural to assume that $\theta^*$ is sparse. For a parameter $q \in [0,1]$ and radius $R_q > 0$, let us define the $\ell_q$ "ball"

$B_q(R_q) := \big\{ \theta \in \mathbb{R}^d \mid \sum_{j=1}^d |\theta_j|^q \le R_q \big\}$.   (9)

Note that $q = 0$ corresponds to the case of "hard sparsity", for which any vector $\theta \in B_0(R_0)$ is supported on a set of cardinality at most $R_0$. For $q \in (0,1]$, membership in $B_q(R_q)$ enforces a decay rate on the ordered coefficients, thereby modelling approximate sparsity.

In order to estimate the unknown regression vector $\theta^* \in B_q(R_q)$, we consider the usual Lasso program, with the quadratic loss function $\mathcal{L}(\theta; Z_1^n) := \frac{1}{2n} \sum_{i=1}^n (y_i - \langle x_i, \theta \rangle)^2$ and the $\ell_1$-norm regularizer $\mathcal{R}(\theta) := \|\theta\|_1$. We consider the Lasso in application to a random design model, in which each predictor vector $x_i \sim N(0, \Sigma)$; we assume that $\max_{j=1,\ldots,d} \Sigma_{jj} \le 1$ for standardization, and that the condition number $\kappa(\Sigma)$ is finite.

Corollary 2 (Sparse vector recovery). Suppose that the observation noise $w_i$ is zero-mean and sub-Gaussian with parameter $\sigma$, and $\theta^* \in B_q(R_q)$, and we use the Lasso program with $\lambda_n = 2\sigma\sqrt{\frac{\log d}{n}}$. Then there are universal positive constants $c_i$, $i = 0, 1, 2, 3$ such that with probability at least $1 - \exp(-c_3 n \lambda_n^2)$, the iterates (2) with $\rho^2 = \Omega\big(\sigma^2 R_q (\tfrac{\log d}{n})^{q/2}\big)$ satisfy

$\|\theta^t - \widehat{\theta}\|_2^2 \le c_0 \Big(1 - \tfrac{\gamma_\ell}{4\gamma_u}\Big)^t + \underbrace{c_1 \kappa(\Sigma)\, \sigma^2 R_q \Big(\tfrac{c_2 \log d}{n}\Big)^{1 - q/2}}_{\epsilon_{\mathrm{stat}}^2}$ for all $t = 0, 1, 2, \ldots$   (10)

It is worth noting that the form of statistical error $\epsilon_{\mathrm{stat}}$ given in bound (10) is known to be minimax optimal up to constant factors [13]. In related work, Garg and Khandekar [4] showed that for the special case of design matrices that satisfy the restricted isometry property (RIP), a thresholded gradient method has geometric convergence up to the tolerance $\|w\|_2/\sqrt{n} \approx \sigma$. However, this tolerance is independent of the sample size, and far larger than the statistical error $\epsilon_{\mathrm{stat}}$ if $n > \log d$; moreover, severe conditions like RIP are not needed to ensure fast convergence. In particular, Corollary 2 guarantees geometric convergence up to $\epsilon_{\mathrm{stat}}$ for many random matrices that violate RIP. The proof of Corollary 2 involves exploiting some random matrix theory results [14] in order to verify that the RSC/RSM conditions hold with high probability (see the full-length version for details).

Matrix regression with rank constraints: For a pair of matrices $A, B \in \mathbb{R}^{m \times m}$, we use $\langle\langle A, B \rangle\rangle = \mathrm{trace}(A^T B)$ to denote the trace inner product. Suppose that we are given $n$ i.i.d. observations of the form $y_i = \langle\langle X_i, \Theta^* \rangle\rangle + w_i$, where $w_i$ is zero-mean noise with variance $\sigma^2$, and $X_i \in \mathbb{R}^{m \times m}$ is an observation matrix. The parameter space is $\Omega = \mathbb{R}^{m \times m}$ and each observation is of the form $Z_i = (X_i, y_i) \in \mathbb{R}^{m \times m} \times \mathbb{R}$. In many contexts, it is natural to assume that $\Theta^*$ is exactly or approximately low rank; applications include collaborative filtering and matrix completion [7, 15], compressed sensing [16], and multitask learning [19, 10, 17]. In order to model such behavior, we let $\sigma(\Theta^*) \in \mathbb{R}^m$ denote the vector of singular values of $\Theta^*$ (padded with zeros as necessary), and impose the constraint $\sigma(\Theta^*) \in B_q(R_q)$. We then consider the M-estimator based on the quadratic loss $\mathcal{L}(\Theta; Z_1^n) = \frac{1}{2n} \sum_{i=1}^n (y_i - \langle\langle X_i, \Theta \rangle\rangle)^2$ combined with the nuclear norm $\mathcal{R}(\Theta)$
$= \|\sigma(\Theta)\|_1$ as the regularizer. Various problems can be cast within this framework of matrix regression:

- Matrix completion: In this case, observation $y_i$ is a noisy version of a randomly selected entry $\Theta^*_{a(i),b(i)}$ of the unknown matrix. It is a special case with $X_i = E_{a(i)b(i)}$, the matrix with one in position $(a(i), b(i))$ and zeros elsewhere.
- Compressed sensing: In this case, the observation matrices $X_i$ are dense, drawn from some random ensemble, the simplest being $X_i \in \mathbb{R}^{m \times m}$ with i.i.d. $N(0,1)$ entries.
- Multitask regression: In this case, the matrix $\Theta^*$ is likely to be non-square, with the column size $m_2$ corresponding to the dimension of the response variable, and $m_1$ to the number of predictors. Imposing a low-rank constraint on $\Theta^*$ is equivalent to requiring that the regression vectors (or columns of the matrix) lie close to a lower-dimensional subspace. See the papers [10, 17] for more details on reformulating this problem as an instance of matrix regression.

For each of these problems, it is possible to show that suitable forms of the RSC/RSM conditions will hold with high probability. For the case of matrix completion, the paper [9] establishes a form of RSC useful for controlling statistical error; this argument can be suitably modified to establish the related notions of RSC/RSM required for ensuring fast algorithmic convergence. Similar statements apply to the settings of compressed sensing and multi-task regression. For these matrix regression problems, consider the statistical precision

$\epsilon_{\mathrm{mat}}^2 \asymp \sigma^2 R_q \Big(\tfrac{m \log m}{n}\Big)^{1-q/2}$ for matrix completion, and $\epsilon_{\mathrm{mat}}^2 \asymp \sigma^2 R_q \Big(\tfrac{m}{n}\Big)^{1-q/2}$ otherwise,

rates that (up to logarithmic factors) are known to be minimax-optimal [9, 17]. As dictated by this statistical theory, the regularization parameter should be chosen as $\lambda_n = c\sigma\sqrt{\frac{m \log m}{n}}$ for matrix completion, and $\lambda_n = c\sigma\sqrt{\frac{m}{n}}$ otherwise, where $c > 0$ is a universal positive constant. The following result applies to matrix regression problems for which the RSC/RSM conditions hold with tolerance $\delta = \epsilon_{\mathrm{mat}}$.

Corollary 3 (Low-rank matrix recovery). Suppose that $\sigma(\Theta^*) \in B_q(R_q)$, and the observation noise is zero-mean $\sigma$-sub-Gaussian. Then there are universal positive constants $c_1, c_2, c_3$ such that with probability at least $1 - \exp(-c_3 n \lambda_n^2)$, the iterates (2) with $\rho = \Omega(\epsilon_{\mathrm{mat}}/\lambda_n)$ satisfy

$|||\Theta^t - \Theta^*|||_F^2 \le c_0 \kappa^t + c_1 \epsilon_{\mathrm{mat}}^2$ for all $t = 0, 1, 2, \ldots$

Here the contraction coefficient $\kappa \in (0,1)$ is a universal constant, independent of $(n, m, R_q)$, depending only on the parameters $(\gamma_\ell, \gamma_u)$. We refer the reader to the full-length version for the specific form taken for different variants of matrix regression.

4 Simulations

In this section, we provide some experimental results that confirm the accuracy of our theoretical predictions. In particular, these results verify the predicted linear rates of convergence under the conditions of Corollaries 2 and 3.

Sparse regression: We consider a random ensemble of problems, in which each design vector $x_i \in \mathbb{R}^d$ is generated i.i.d. according to the recursion $x_i(1) = z_1$ and $x_i(j) = z_j + \omega x_i(j-1)$ for $j = 2, \ldots, d$, where the $z_j$ are $N(0,1)$, and $\omega \in [0,1)$ is a correlation parameter. The singular values of the resulting covariance matrix $\Sigma$ satisfy the bounds $\sigma_{\min}(\Sigma) \ge \frac{1}{(1+\omega)^2}$ and $\sigma_{\max}(\Sigma) \le \frac{2}{(1-\omega)^2(1+\omega)}$. Note that $\Sigma$ has a finite condition number for all $\omega \in [0,1)$; for $\omega = 0$, it is the identity, but it becomes ill-conditioned as $\omega \to 1$. We recall that in this setting $y_i = \langle x_i, \theta^* \rangle + w_i$ where $w_i \sim N(0,1)$ and $\theta^* \in B_q(R_q)$.
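A minimal sketch of this design-generation recursion (the function and parameter names are ours): it draws the correlated design and empirically checks that the sample covariance spectrum is roughly consistent with the stated bounds.

import numpy as np

def correlated_design(n, d, omega, rng):
    # x(1) = z_1 and x(j) = z_j + omega * x(j-1), with z_j ~ N(0, 1).
    Z = rng.standard_normal((n, d))
    X = np.empty_like(Z)
    X[:, 0] = Z[:, 0]
    for j in range(1, d):
        X[:, j] = Z[:, j] + omega * X[:, j - 1]
    return X

rng = np.random.default_rng(0)
X = correlated_design(2000, 50, omega=0.5, rng=rng)
evals = np.linalg.eigvalsh(X.T @ X / X.shape[0])
print(evals.min(), evals.max())   # sample spectrum, roughly within the bounds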
We study the convergence properties for sample sizes $n = \alpha s \log d$ using different values of $\alpha$. We note that the per-iteration cost of our algorithm is $\mathcal{O}(n \cdot d)$. All our results are averaged over 10 random trials. Our first experiment is based on taking the correlation parameter $\omega = 0$, and the $\ell_q$-ball parameter $q = 0$, corresponding to exact sparsity. We then measure convergence rates for $\alpha \in \{1, 1.25, 5, 25\}$ with $d = 40000$ and $s = (\log d)^2$. As shown in Figure 1(a), the procedure fails to converge for $\alpha = 1$: with this setting, the sample size $n$ is too small for conditions (RSC) and (RSM) to hold, so that a constant step size leads to oscillations. For $\alpha$ sufficiently large to ensure RSC/RSM, we observe geometric convergence of the error $\|\theta^t - \widehat{\theta}\|^2$, and the convergence rate is faster for $\alpha = 25$ compared to $\alpha = 5$, since the RSC/RSM constants are better with the larger sample size. On the other hand, we expect the convergence rates to be slower when the condition number of $\Sigma$ is worse; to address this issue, we ran the same set of experiments with the correlation parameter $\omega = 0.5$. As shown in Figure 1(b), in sharp contrast to the case $\omega = 0$, we no longer observe geometric convergence for $\alpha = 1.25$, since the conditioning of $\Sigma$ with $\omega = 0.5$ is much poorer than with the identity matrix. Finally, we also expect optimization to be harder as the sparsity parameter $q \in [0,1]$ is increased away from zero. For larger $q$, larger sample sizes are required to verify the RSC/RSM conditions. Figure 1(c) shows that even with $\omega = 0$, setting $\alpha = 5$ is required for geometric convergence.

[Figure 1: three panels plotting the log optimization error $\log(\|\theta^t - \widehat{\theta}\|^2)$ against iterations in the sparse linear regression problem, with curves for $\alpha \in \{1, 1.25, 5, 25\}$. In this problem, $d = 40000$, $s = (\log d)^2$, $n = \alpha s \log d$. Panel (a) shows convergence for the exact sparse case with $q = 0$ and $\Sigma = I$ (i.e., $\omega = 0$). Panel (b) shows how convergence rates change for a non-identity covariance with $\omega = 0.5$. Panel (c) shows the convergence rates when $\omega = 0$, $q = 1$.]

Low-rank matrices: We also performed experiments with two different versions of low-rank matrix regression, each time with $m^2 = 160^2$. The first setting is a version of compressed sensing with matrices $X_i \in \mathbb{R}^{160 \times 160}$ with i.i.d. $N(0,1)$ entries; we set $q = 0$ and formed a matrix $\Theta^*$ with rank $R_0 = \lceil \log m \rceil$. We then performed a series of trials with sample size $n = \alpha R_0 m$, with the parameter $\alpha \in \{1, 5, 25\}$. The per-iteration cost in this case is $\mathcal{O}(n \cdot m^2)$. As seen in Figure 2(a), the general behavior of convergence rates in this problem stays the same as for the sparse linear regression problem: the method fails to converge when $\alpha$ is too small, and converges geometrically (with a progressively faster rate) as $\alpha$ increases. Figure 2(b) shows that matrix completion also enjoys geometric convergence, for both exactly low-rank ($q = 0$) and approximately low-rank matrices.

[Figure 2: (a) Plot of the log Frobenius error $\log(|||\Theta^t - \widehat{\Theta}|||_F)$ versus the number of iterations in matrix compressed sensing for a matrix size $m = 160$ with rank $R_0 = \lceil \log(160) \rceil$, and sample sizes $n = \alpha R_0 m$. For $\alpha = 1$, the algorithm oscillates, whereas geometric convergence is obtained for $\alpha \in \{5, 25\}$, consistent with the theoretical prediction. (b) Plot of the log Frobenius error $\log(|||\Theta^t - \widehat{\Theta}|||_F)$ versus the number of iterations in matrix completion with approximately low-rank matrices ($q \in \{0, 0.5, 1\}$), showing geometric convergence.]
5 Discussion

We have shown that even though high-dimensional M-estimators in statistics are neither strongly convex nor smooth, simple first-order methods can still enjoy global guarantees of geometric convergence. The key insight is that strong convexity and smoothness need only hold in restricted senses, and moreover, these conditions are satisfied with high probability for many statistical models and decomposable regularizers used in practice. Examples include sparse linear regression with $\ell_1$-regularization, various statistical models with group-sparse regularization, and matrix regression with nuclear norm constraints. Overall, our results highlight that the properties of M-estimators favorable for fast rates in a statistical sense can also be used to establish fast rates for optimization algorithms.

Acknowledgements: AA, SN and MJW were partially supported by grant AFOSR09NL184; SN and MJW acknowledge additional funding from NSF-CDI-0941742.

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[2] S. Becker, J. Bobin, and E. J. Candes. Nesta: a fast and accurate first-order method for sparse recovery. Technical report, Stanford University, 2009.
[3] K. Bredies and D. A. Lorenz. Linear convergence of iterative soft-thresholding. Journal of Fourier Analysis and Applications, 14:813-837, 2008.
[4] R. Garg and R. Khandekar. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In ICML, New York, NY, USA, 2009. ACM.
[5] E. T. Hale, Y. Wotao, and Y. Zhang. Fixed-point continuation for $\ell_1$-minimization: methodology and convergence. SIAM Journal on Optimization, 19(3):1107-1130, 2008.
[6] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In International Conference on Machine Learning, New York, NY, USA, 2009. ACM.
[7] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Technical Report UILU-ENG-09-2214, Univ. Illinois, Urbana-Champaign, July 2009.
[8] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In NIPS Conference, Vancouver, Canada, December 2009. Full-length version arXiv:1010.2731v1.
[9] S. Negahban and M. J. Wainwright. Restricted strong convexity and (weighted) matrix completion: optimal bounds with noise. Technical report, UC Berkeley, August 2010. arXiv:1009.2118.
[10] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, to appear. Originally posted as arXiv:0912.5100.
[11] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, New York, 2004.
[12] Y. Nesterov. Gradient methods for minimizing composite objective function.
Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 2007.
[13] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls. Technical Report arXiv:0910.2042, UC Berkeley, Department of Statistics, 2009.
[14] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue conditions for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, August 2010.
[15] B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 2010. Posted as arXiv:0910.0651v2.
[16] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[17] A. Rohde and A. Tsybakov. Estimation of high-dimensional low-rank matrices. Technical Report arXiv:0912.5338v2, Universite de Paris, January 2010.
[18] J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655-4666, December 2007.
[19] M. Yuan, A. Ekici, Z. Lu, and R. Monteiro. Dimension reduction and coefficient estimation in multivariate linear regression. Journal of the Royal Statistical Society, Series B, 69(3):329-346, 2007.
Multiple Kernel Learning and the SMO Algorithm

S. V. N. Vishwanathan, Zhaonan Sun, Nawanol Theera-Ampornpunt
Purdue University
[email protected], [email protected], [email protected]

Manik Varma
Microsoft Research India
[email protected]

Abstract

Our objective is to train p-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm. The SMO algorithm is simple, easy to implement and adapt, and efficiently scales to large problems. As a result, it has gained widespread acceptance and SVMs are routinely trained using SMO in diverse real world applications. Training using SMO has been a long standing goal in MKL for the very same reasons. Unfortunately, the standard MKL dual is not differentiable, and therefore can not be optimised using SMO style co-ordinate ascent. In this paper, we demonstrate that linear MKL regularised with the p-norm squared, or with certain Bregman divergences, can indeed be trained using SMO. The resulting algorithm retains both simplicity and efficiency and is significantly faster than state-of-the-art specialised p-norm MKL solvers. We show that we can train on a hundred thousand kernels in approximately seven minutes and on fifty thousand points in less than half an hour on a single core.

1 Introduction

Research on Multiple Kernel Learning (MKL) needs to follow a two pronged approach. It is important to explore formulations which lead to improvements in prediction accuracy. Recent trends indicate that performance gains can be achieved by non-linear kernel combinations [7, 18, 21], learning over large kernel spaces [2] and by using general, or non-sparse, regularisation [6, 7, 12, 18]. Simultaneously, efficient optimisation techniques need to be developed to scale MKL out of the lab and into the real world. Such algorithms can help in investigating new application areas and different facets of the MKL problem, including dealing with a very large number of kernels and data points.

Optimisation using decompositional algorithms such as Sequential Minimal Optimization (SMO) [15] has been a long standing goal in MKL [3] as the algorithms are simple, easy to implement and efficiently scale to large problems. The hope is that they might do for MKL what SMO did for SVMs: allow people to play with MKL on their laptops, modify and adapt it for diverse real world applications and explore large scale settings in terms of the number of kernels and data points. Unfortunately, the standard MKL formulation, which learns a linear combination of base kernels subject to $l_1$ regularisation, leads to a dual which is not differentiable. SMO can not be applied as a result, and [3] had to resort to expensive Moreau-Yosida regularisation to smooth the dual. State-of-the-art algorithms today overcome this limitation by solving an intermediate saddle point problem rather than the dual itself [12, 16].

Our focus, in this paper, is on training p-norm MKL, with p > 1, using the SMO algorithm. More generally, we prove that linear MKL regularised by certain Bregman divergences can also be trained
using SMO. We shift the emphasis firmly back towards solving the dual in such cases. The $l_p$-MKL dual is shown to be differentiable and thereby amenable to co-ordinate ascent. Placing the p-norm squared regulariser in the objective lets us efficiently solve the core reduced two variable optimisation problem analytically in some cases and algorithmically in others. Using results from [4, 9], we can compute the $l_p$-MKL Hessian, which brings into play second order variable selection methods which tremendously speed up the rate of convergence [8]. The standard decompositional method proof of convergence [14] to the global optimum holds with minor modifications. The resulting optimisation algorithm, which we call SMO-MKL, is straightforward to implement and efficient. We demonstrate that SMO-MKL can be significantly faster than the state-of-the-art specialised p-norm solvers [12]. We empirically show that the SMO-MKL algorithm is robust, with the desirable property that it is not greatly affected within large operating ranges of p. This implies that our algorithm is well suited for learning both sparse, and non-sparse, kernel combinations. Furthermore, SMO-MKL scales well to large problems. We show that we can efficiently combine a hundred thousand kernels in approximately seven minutes or train on fifty thousand points in less than half an hour using a single core on standard hardware, where other solvers fail to produce results. The SMO-MKL code can be downloaded from [20].

2 Related Work

Recent trends indicate that there are three promising directions of research for obtaining performance improvements using MKL. The first involves learning non-linear kernel combinations. A framework for learning general non-linear kernel combinations subject to general regularisation was presented in [18]. It was demonstrated that, for feature selection, the non-linear GMKL formulation could perform significantly better not only as compared to linear MKL but also state-of-the-art wrapper methods and filter methods with averaging. Very significant performance gains in terms of pure classification accuracy were reported in [21] by learning a different kernel combination per data point or cluster. Again, the results were better not only as compared to linear MKL but also baselines such as averaging. Similar trends were observed for regression while learning polynomial kernel combinations [7]. Other promising directions which have resulted in performance gains are sticking to standard MKL but combining an exponentially large number of kernels [2] and linear MKL with p-norm regularisers [6, 12]. Thus MKL based methods are beginning to define the state-of-the-art for very competitive applications, such as object recognition on the Caltech 101 database [21] and object detection on the PASCAL VOC 2009 challenge [19].

In terms of optimisation, initial work on MKL leveraged general purpose SDP and QCQP solvers [13]. The SMO+M.-Y. regularisation method of [3] was one of the first techniques that could efficiently tackle medium scale problems. This was superseded by the SILP technique of [17] which could, very impressively, train on a million point problem with twenty kernels using parallelism. Unfortunately, the method did not scale well with the number of kernels. In response, many two-stage wrapper techniques came up [2, 10, 12, 16, 18] which could be significantly faster when the number of training points was reasonable but the number of kernels large. SMO could indirectly be used in some of these cases to solve the inner SVM optimisation. The primary disadvantage of these techniques was that they solved the inner SVM to optimality. In fact, the solution needed to be of high enough precision so that the kernel weight gradient computation was accurate and the algorithm converged.
In addition, Armijo rule based step size selection was also very expensive and could involve tens of inner SVM evaluations in a single line search. This was particularly expensive since the kernel cache would be invalidated from one SVM evaluation to the next. The one big advantage of such two-stage methods for $l_1$-MKL was that they could quickly identify, and discard, the kernels with zero weights and thus scaled well with the number of kernels. Most recently, [12] have come up with specialised p-norm solvers which make substantial gains by not solving the inner SVM to optimality and by working with a small active set to better utilise the kernel cache.

3 The $l_p$-MKL Formulation

The objective in MKL is to jointly learn kernel and SVM parameters from training data $\{(x_i, y_i)\}$. Given a set of base kernels $\{K_k\}$ and corresponding feature maps $\{\phi_k\}$, linear MKL aims to learn a linear combination of the base kernels as $K = \sum_k d_k K_k$. If the kernel weights are restricted to
The SMO algorithm can therefore be brought to bear where two variables are selected and optimised using gradient or Newton methods and the process repeated until convergence. Also note that it has sometimes been observed that l2 regularisation can provide better results than l1 [6, 7, 12, 18]. For this special case, when p = q = 2, the reduced two variable problem can be solved analytically. This was one of the primary motivations for choosing the p-norm squared regulariser and placing it in the primal objective (the other was to be consistent with other p-norm formulations [9, 11]). Had we included the regulariser as a primal constraint then the dual would have the q-norm rather than the q-norm squared. Our dual would then be near identical to Eq. (9) in [12]. However, it would then no longer have been possible to solve the two variable reduced problem analytically for the 2-norm special case. 3 4 SMO-MKL Optimisation We now develop the SMO-MKL algorithm for optimising the lp MKL dual. The algorithm has three main components: (a) reduced variable optimisation; (b) working set selection and (c) stopping criterion and kernel caching. We build the SMO-MKL algorithm around the LibSVM code base [5]. 4.1 The Reduced Variable Optimisation The SMO algorithm works by repeatedly choosing two variables (assumed to be ?1 and ?2 without loss of generality in this Subsection) and optimising them while holding all other variables constant. If ?1 ? ?1 + ? and ?2 ? ?2 + s?, the dual simplifies to ?? = argmax (1 + s)? ? L???U 2 1 X ( (ak ?2 + 2bk ? + ck )q ) q 8? (11) k where s = ?y1 y2 , L = (s == +1) ? max(??1 , ??2 ) : max(??1 , ?2 ? C), U = (s == +1) ? min(C ? ?1 , C ? ?2 ) : min(C ? ?1 , ?2 ), ak = H11k + H22k + 2sH12k , bk = ?t (H:1k + sH:2k ) and ck = ?t Hk ?. Unlike as in SMO, ?? can not be found analytically for arbitrary p. Nevertheless, since this is a simple one dimensional concave optimisation problem, we can efficiently find the global optimum using a variety of methods. We tried bisection search and Brent?s algorithm but the Newton-Raphson method worked best ? partly because the one dimensional Hessian was already available from the working set selection step. 4.2 Working Set Selection The choice of which two variables to select for optimisation can have a big impact on training time. Very simple strategies, such as random sampling, can have very little cost per iteration but need many iterations to converge. First and second order working set selection techniques are more expensive per iteration but converge in far fewer iterations. We implement the greedy second order working set selection strategy of [8]. We do not give the variable selection equations due to lack of space but refer the interested reader to the WSS2 method of [8] and our source code [20]. The critical thing is that the selection of the first (second) variable involves computing the gradient (Hessian) of the dual. These are readily derived to be X ?? D = 1 ? dk Hk ? = 1 ? H? (12) k ?2? D = ?H ? 1X ??k f ?1 (?)(Hk ?)(Hk ?)t ? (13) k where ??k f ?1 (?) = (2 ? q)? 2?2q ?k2q?2 + (q ? 1)? q2?q ?kq?2 and ?k q = 1 t ? Hk ? 2? (14) where D has been overloaded to now refer to the dual objective. Rather than compute the gradient ?? D repeatedly, we speed up variable selection by caching, separately for each kernel, Hk ?. The cache needs to be updated every time we change ? in the reduced variable optimisation. However, since only two variables are changed, Hk ? can be updated by summing along just two columns of the kernel matrix. 
This involves only O(M ) work in all, where M is the number of kernels, since the column sums can be pre-computed for each kernel. The Hessian is too expensive to cache and is recomputed on demand. 4.3 Stopping Criterion and Kernel Caching We terminate the SMO-MKL algorithm when the duality gap falls below a pre-specified threshold. Kernel caching strategies can have a big impact on performance since kernel computations can dominate everything else in some cases. While a few different kernel caching techniques have been explored for SVMs, we stick to the standard one used in LibSVM [5]. A Least Recently Used (LRU) cache is implemented as a circular queue. Each element in the queue is a pointer to a recently accessed (common) row of each of the individual kernel matrices. 4 5 Special Cases and Extensions We briefly discuss a few special cases and extensions which impact our SMO-MKL optimisation. 5.1 2-Norm MKL As we noted earlier, 2-norm MKL has sometimes been found to outperform MKL trained with l1 regularisation [6, 7, 12, 18]. For this special case, when p = q = 2, our dual and reduced variable optimisation problems simplify to polynomials of degree four 1 X t D2 ? max 1t ? ? (? Hk ?)2 (15) ??A 8? k 1 X ?? = argmax (1 + s)? ? (ak ?2 + 2bk ? + ck )2 (16) 8? L???U k ? Just as in standard SMO, ? can now be found analytically by using the expressions for the roots of a cubic. This makes our SMO-MKL algorithm particularly efficient for p = 2 and our code defaults to the analytic solver for this special case. 5.2 The Bregman Divergence as a Regulariser The Bregman divergence generalises the squared p-norm. It is not a metric as it is not symmetric and does not obey the triangle inequality. In this Subsection, we demonstrate that our MKL formulation can also incorporate the Bregman divergence as a regulariser. Let F be any differentiable, strictly convex function and f = ?F represent its gradient. The Bregman divergence generated by F is given by rF (d) = F (d) ? F (d0 ) ? (d ? d0 )t f (d0 ). Note that ?rF (d) = f (d) ? f (d0 ). Incorporating the Bregman divergence as a regulariser in our primal objective leads to the following intermediate saddle point problem and Lagrangian X IB ? min max 1t ? ? 12 dk ?t Hk ? + ?rF (d) (17) d?0 ??A LB = 1t ? ? X k dk (?k + 12 ?t Hk ?) + ?rF (d) ?d LB = 0 ? f (d) ? f (d0 ) = g(?, ?)/? ?d=f ?1 (18) k (19) (f (d0 ) + g(?, ?)/?) = f ?1 (?(?, ?)) (20) 1 t 2 ? Hk ? where g is a vector with entries gk (?, ?) = ?k + and ?(?, ?) = f (d0 ) + g(?, ?)/?. Substituting back in the Lagrangian and discarding terms dependent on just d0 results in the dual DR ? max ??A,??0 1t ? + ?(F (f ?1 (?)) ? ? t f ?1 (?)) (21) In many cases the optimal value of ? will turn out to be zero and the optimisation can efficiently be carried out over ? using our SMO-MKL algorithm. Generalised KL Divergence To take a concrete example, different from the p-norm squared used thus far, we investigate the use of the generalised KL divergence as a regulariser. Choosing F (d) = P k dk (log(dk ) ? 1) leads to the generalised KL divergence between d and d0 X X X rKL (d) = dk log(dk /d0k ) ? dk + d0k (22) k k k Plugging in rKL in IB and following the steps above leads to the following dual problem X t 1 max 1t ? ? ? d0k e 2? ? Hk ? ??A (23) k which can be optimised straight forwardly using our SMO-MKL algorithm once we plug in the gradient and hessian information. However, discussing this further would take us too far out of the scope of this paper. 
We therefore stay focused on lp -MKL for the remainder of this paper. 5 5.3 Regression and Other Loss Functions While we have discussed MKL based classification so far we can easily adapt our formulation to handle other convex loss functions such as regression, novelty detection, etc. We demonstrate this for the ?-insensitive loss function for regression. The primal, intermediate saddle point and final dual problems are given by X X ? X p p2 + ? t 1 w w /d + C (? + ? ) + ( dk ) PR ? min (24) k k k i i 2 2 w,b,?? ?0,d?0 i k k X such that ? ( (25) wtk ?k (xi ) + b ? yi ) ? ? + ?i? k IR ? min max DR ? max d?0 ?|?|?C1, 1t ?=0 0?|?|?C1, 1t ?=0 t 1 (Y ? ? ?|?|) ? 1t (Y ? ? ?|?|) ? 1 2 X d k ? t Kk ? + k ? X p p2 ( dk ) 2 2 1 X t ( (? Kk ?)q ) q 8? (26) k (27) k SMO has a slightly harder time optimising DR due to the |?| term which, though in itself not differentiable, can be gotten around by substituting ? = ?+ ? ?? at the cost of doubling the number of dual variables. 6 Experiments In this Section, we empirically compare the performance of our proposed SMO-MKL algorithm against the specialised lp -MKL solver of [12] which is referred to as Shogun. Code, scripts and parameter settings were helpfully provided by the authors and we ensure that our stopping criteria are compatible. All experiments are carried out on a single core of an AMD 2380 2.5 GHz processor with 32 Gb RAM. Our focus in these experiments is purely on training time and speed of optimisation as the prediction accuracy improvements of lp -MKL have already been documented [12]. We carry out two sets of experiments. The first, on small scale UCI data sets, are carried out using pre-computed kernels. This performs a direct comparison of the algorithmic components of SMOMKL and Shogun. We also carry out a few large scale experiments with kernels computed on the fly. This experiment compares the two methods in totality. In this case, kernel caching can have an effect, but not a significant one as the two methods have very similar caching strategies. For each UCI data set we generated kernels as recommended in [16]. We generated RBF kernels with ten bandwidths for each individual dimension of the feature vector as well as the full feature vector itself. Similarly, we also generated polynomial kernels of degrees 1, 2 and 3. All kernels matrices were pre-computed and normalised to have unit trace. We set C = 100 as it gives us a reasonable accuracy on the test set. Note that for some value of ?, SMO-MKL and Shogun will converge to exactly the same solution [12]. Since this value is not known a priori we arbitrarily set ? = 1. Training times on the UCI data sets are presented in Table 1. Means and standard deviations are reported for five fold cross-validation. As can be seen, SMO-MKL is significantly faster than Shogun at converging to similar solutions and obtaining similar test accuracies. In many cases, SMO-MKL is more than four times as fast and in some case more than ten or twenty times as fast. Note that our test classification accuracy on Liver is a lot lower than Shogun?s. This is due to the arbitrary choice of ?. We can vary our ? on Liver to recover the same accuracy and solution as Shogun with a further decrease in our training time. Another very positive thing is that SMO-MKL appears to be relatively stable across a large operating range of p. The code is, in most of the cases as expected, fastest when p = 2 and gets slower as one increases or decreases p. 
Interestingly though, the algorithm doesn?t appear to be significantly slower for other values of p. Therefore, it is hoped that SMO-MKL can be used to learn sparse kernel combinations as well as non-sparse ones. Moving on to the large scale experiments with kernels computed on the fly, we first tried combining a hundred thousand RBF kernels on the Sonar data set with 208 points and 59 dimensional features. 6 Table 1: Training times on UCI data sets with N training points, D dimensional features, M kernels and T test points. Mean and standard deviations are reported for 5-fold cross validation. (a) Australian: N =552, T =138, D=13, M =195. Training Time (s) Test Accuracy (%) # Kernels Selected p SMO-MKL Shogun SMO-MKL Shogun SMO-MKL Shogun 1.10 4.89 ? 0.31 58.52 ? 16.49 85.22 ? 2.96 85.22 ? 2.81 26.4 ? 0.8 137.2 ? 53.8 62.4 ? 4.7 1.33 4.16 ? 0.16 33.58 ? 2.58 85.36 ? 3.79 85.07 ? 2.85 40.8 ? 1.3 1.66 4.31 ? 0.19 31.89 ? 1.25 85.65 ? 3.73 85.07 ? 2.85 72.2 ? 4.8 100.2 ? 3.7 2.00 4.27 ? 0.10 27.08 ? 7.18 85.80 ? 3.74 85.22 ? 2.99 126.4 ? 4.3 134.4 ? 5.6 2.33 4.88 ? 0.18 24.92 ? 6.46 85.80 ? 3.74 85.07 ? 2.85 162.8 ? 3.6 177.8 ? 8.3 2.66 5.19 ? 0.05 26.90 ? 2.05 85.80 ? 3.68 85.22 ? 2.85 188.2 ? 4.7 188.8 ? 5.1 3.00 5.48 ? 0.21 27.06 ? 2.20 85.51 ? 3.69 85.22 ? 2.85 192.0 ? 2.6 194.4 ? 1.2 p 1.10 1.33 1.66 2.00 2.33 2.66 3.00 p 1.10 1.33 1.66 2.00 2.33 2.66 3.00 p 1.10 1.33 1.66 2.00 2.33 2.66 3.00 (b) Ionosphere: N =280, T =71, D=33, M =442. Training Time (s) Test Accuracy (%) # Kernels Selected SMO-MKL Shogun SMO-MKL Shogun SMO-MKL Shogun 2.85 ? 0.16 19.82 ? 4.02 92.60 ? 1.35 92.03 ? 1.68 50.0 ? 2.7 125.2 ? 7.3 2.78 ? 1.18 8.49 ? 0.61 92.03 ? 1.42 92.60 ? 1.86 120.8 ? 6.0 217.0 ? 23.4 2.42 ? 0.28 10.49 ? 2.27 91.74 ? 2.08 91.74 ? 1.37 200.8 ? 4.4 291.4 ? 33.0 2.16 ? 0.16 13.99 ? 4.68 92.03 ? 1.68 91.17 ? 2.45 328.0 ? 6.6 364.2 ? 15.4 2.35 ? 0.25 24.90 ? 9.43 92.03 ? 1.68 91.74 ? 2.08 413.6 ? 5.6 412.2 ? 6.6 2.50 ? 0.32 33.05 ? 3.66 92.03 ? 1.68 92.03 ? 1.68 430.6 ? 4.6 436.6 ? 4.3 3.03 ? 0.99 36.23 ? 3.62 92.31 ? 1.41 91.75 ? 2.05 434.4 ? 4.8 442.0 ? 0.0 (c) Liver: N =276, T =69, D=5, M =91. Training Time (s) Test Accuracy (%) SMO-MKL Shogun SMO-MKL Shogun 0.53 ? 0.03 2.15 ? 0.12 62.90 ? 9.81 66.67 ? 9.91 0.54 ? 0.03 0.92 ? 0.05 66.09 ? 8.48 71.59 ? 8.92 0.56 ? 0.04 1.14 ? 0.23 66.96 ? 7.53 70.72 ? 9.28 0.54 ? 0.04 1.72 ? 0.57 66.96 ? 7.06 72.17 ? 6.94 0.63 ? 0.03 2.35 ? 0.36 66.38 ? 7.36 73.33 ? 6.71 0.65 ? 0.02 2.53 ? 0.44 65.22 ? 6.80 72.75 ? 7.96 0.67 ? 0.03 3.40 ? 0.55 65.22 ? 6.74 73.91 ? 7.28 (d) Sonar: N =166, T =42, D=59, M =793. Training Time (s) Test Accuracy (%) SMO-MKL Shogun SMO-MKL Shogun 4.95 ? 0.29 47.19 ? 3.85 85.15 ? 7.99 81.25 ? 8.71 4.00 ? 0.76 18.28 ? 1.63 84.65 ? 9.37 87.03 ? 6.85 4.48 ? 1.63 20.27 ? 8.84 88.47 ? 6.68 87.51 ? 6.28 3.31 ? 0.31 31.52 ? 5.07 88.94 ? 6.00 88.95 ? 6.33 3.54 ? 0.35 51.83 ? 17.96 88.94 ? 4.97 88.94 ? 5.41 3.83 ? 0.38 64.59 ? 9.19 88.94 ? 4.97 88.94 ? 4.97 3.96 ? 0.45 70.08 ? 9.18 88.94 ? 4.97 89.92 ? 5.13 # Kernels Selected SMO-MKL Shogun 9.40 ? 1.02 39.40 ? 1.50 24.40 ? 2.06 43.60 ? 2.42 44.20 ? 2.23 57.00 ? 3.29 71.00 ? 5.29 78.00 ? 2.28 82.40 ? 2.42 88.20 ? 1.72 83.20 ? 2.32 90.80 ? 0.40 85.20 ? 3.37 91.00 ? 0.00 # Kernels Selected SMO-MKL Shogun 91.2 ? 6.9 258.0 ? 24.8 247.8 ? 7.7 374.2 ? 20.9 383.0 ? 5.7 451.6 ? 12.0 661.2 ? 10.2 664.8 ? 35.2 770.8 ? 4.4 763.0 ? 7.0 782.0 ? 3.4 789.4 ? 2.8 786.0 ? 4.1 792.2 ? 1.1 Note that these kernels do not form any special hierarchy so approaches such as [2] are not applicable. 
Timing results on a log-log scale are given in Figure 1(a). As can be seen, SMO-MKL appears to be scaling linearly with the number of kernels and we converge in less than half an hour on all hundred thousand kernels for both p = 2 and p = 1.33. If we were to run the same experiment using pre-computed kernels then we converge in approximately seven minutes (see Figure 1(b)). On the other hand, Shogun took six hundred seconds to combine just ten thousand kernels computed on the fly. The trend was the same when we increased the number of training points. Figures 1(c) and 1(d) plot timing results on a log-log scale as the number of training points is varied on the Adult and Web data sets (please see [1] for data set details and downloads). We used 50 kernels computed on the fly for these experiments.

Figure 1: Large scale experiments varying the number of kernels and points. Panels: (a) Sonar, (b) Sonar pre-computed, (c) Adult, (d) Web. The axes show log(Time) in seconds against log(# Kernels) for (a) and (b), and against log(# Training Points) for (c) and (d), for SMO-MKL and Shogun at p = 1.33 and p = 2.00. See text for details.

On Adult, up to about six thousand points, SMO-MKL is roughly 1.5 times faster than Shogun for p = 1.33 and 5 times faster for p = 2. However, on reaching eleven thousand points, Shogun starts taking more and more time to converge and we could not get results for sixteen thousand points or more. SMO-MKL was unaffected and converged on the full data set with 32,561 points in 9245.80 seconds for p = 1.33 and 8511.12 seconds for p = 2. We tried the Web data set to see whether the SMO-MKL algorithm would scale beyond 32K points. Training on all 49,749 points and 50 kernels took 1574.73 seconds (i.e. less than half an hour) with p = 1.33 and 2023.35 seconds with p = 2.

7 Conclusions

We developed the SMO-MKL algorithm for efficiently optimising the lp-MKL formulation. We placed the emphasis firmly back on optimising the MKL dual rather than the intermediate saddle point problem on which all state-of-the-art MKL solvers are based. We showed that the lp-MKL dual is differentiable and that placing the p-norm squared regulariser in the primal objective lets us analytically solve the reduced variable problem for p = 2. We could also solve the convex, one-dimensional reduced variable problem when p ≠ 2 by the Newton-Raphson method. A second-order working set selection algorithm was implemented to speed up convergence. The resulting algorithm is simple, easy to implement and efficiently scales to large problems. We also showed how to generalise the algorithm to handle not just p-norms squared but also certain Bregman divergences. In terms of empirical performance, we compared the SMO-MKL algorithm to the specialised lp-MKL solver of [12] referred to as Shogun. It was demonstrated that SMO-MKL was significantly faster than Shogun on both small and large scale data sets, sometimes by an order of magnitude. SMO-MKL was also found to be relatively stable for various values of p and could therefore be used to learn both sparse, and non-sparse, kernel combinations.
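The Newton-Raphson inner solve mentioned above, for the one-dimensional reduced variable problem when p ≠ 2, can be sketched generically. The safeguarded scalar iteration below is an assumption-level illustration, not the SMO-MKL source: the function names and the bisection fallback are ours.

```python
def newton_1d(grad, hess, lo, hi, x0, tol=1e-8, max_iter=50):
    """Maximise a smooth concave 1-D objective on [lo, hi]; a sketch only.

    grad, hess : callables returning first and second derivatives
    lo, hi     : box constraints on the step (e.g. from 0 <= alpha <= C)
    """
    x = min(max(x0, lo), hi)
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        # Newton step when curvature is negative; otherwise bisect toward
        # the ascent direction to stay inside the feasible interval.
        if h < 0:
            x_new = x - g / h
        else:
            x_new = (x + (hi if g > 0 else lo)) / 2.0
        x = min(max(x_new, lo), hi)
    return x
```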
We demonstrated that the algorithm could combine a hundred thousand kernels on Sonar in approximately seven minutes using pre-computed kernels and in less than half an hour using kernels computed on the fly. This is significant as many non-linear kernel combination forms, which lead to performance improvements but are non-convex, can be recast as convex linear MKL with a much larger set of base kernels. The SMO-MKL algorithm can now be used to tackle such problems as long as an appropriate regulariser can be found. We were also able to train on the entire Web data set with nearly fifty thousand points and fifty kernels computed on the fly in less than half an hour. Other solvers were not able to return results on these problems. All experiments were carried out on a single core and therefore, we believe, redefine the state-of-the-art in terms of MKL optimisation. The SMO-MKL code is available for download from [20].

Acknowledgements

We are grateful to Saurabh Gupta, Marius Kloft and Soren Sonnenburg for helpful discussions, feedback and help with Shogun.

References

[1] http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html.
[2] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In NIPS, pages 105-112, 2008.
[3] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, pages 6-13, 2004.
[4] A. Ben-Tal, T. Margalit, and A. Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. SIAM Journal on Optimization, 12(1):79-108, 2001.
[5] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[6] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In UAI, 2009.
[7] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning non-linear combinations of kernels. In NIPS, 2009.
[8] R. E. Fan, P. H. Chen, and C. J. Lin. Working set selection using second order information for training SVM. JMLR, 6:1889-1918, 2005.
[9] C. Gentile. Robustness of the p-norm algorithms. ML, 53(3):265-299, 2003.
[10] M. Gonen and E. Alpaydin. Localized multiple kernel learning. In ICML, 2008.
[11] J. Kivinen, M. K. Warmuth, and B. Hassibi. The p-norm generalization of the LMS algorithm for adaptive filtering. IEEE Trans. Signal Processing, 54(5):1782-1793, 2006.
[12] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K.-R. Muller, and A. Zien. Efficient and accurate lp-norm Multiple Kernel Learning. In NIPS, 2009.
[13] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27-72, 2004.
[14] C. J. Lin, S. Lucidi, L. Palagi, A. Risi, and M. Sciandrone. Decomposition algorithm model for singly linearly-constrained problems subject to lower and upper bounds. JOTA, 141(1):107-126, 2009.
[15] J. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods - Support Vector Learning, pages 185-208, 1999.
[16] A. Rakotomamonjy, F. Bach, Y. Grandvalet, and S. Canu. SimpleMKL. JMLR, 9:2491-2521, 2008.
[17] S. Sonnenburg, G. Raetsch, C. Schaefer, and B. Schoelkopf. Large scale multiple kernel learning. JMLR, 7:1531-1565, 2006.
[18] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In ICML, 2009.
[19] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV, 2009.
[20] S. V. N. Vishwanathan, Z. Sun, N. Theera-Ampornpunt, and M. Varma. The SMO-MKL code, 2010. http://research.microsoft.com/~manik/code/SMO-MKL/download.html.
[21] J. Yang, Y. Li, Y. Tian, L. Duan, and W. Gao. Group-sensitive multiple kernel learning for object categorization. In ICCV, 2009.
Optimal Web-scale Tiering as a Flow Problem

Novi Quadrianto (SML-NICTA & RSISE-ANU, Canberra, ACT, Australia) novi.quad@gmail.com
Gilbert Leung (eBay, Inc., San Jose, CA, USA) gleung@ebay.com
Kostas Tsioutsiouliklis (Yahoo! Labs, Sunnyvale, CA, USA) kostas@yahoo-inc.com
Alexander J. Smola (Yahoo! Research, Santa Clara, CA, USA) alex@smola.org

Abstract

We present a fast online solver for large scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches to serve an incoming query stream optimally.

1 Introduction

Parametric flow problems have been well-studied in operations research [7]. The area has received a significant number of contributions and has been applied in many problem areas such as database record segmentation [2], energy minimization for computer vision [10], critical load factor determination in two-processor systems [16], end-of-season baseball elimination [6], and most recently by [19, 18, 20] in product portfolio selection. In other words, it is a key technique for many estimation and assignment problems. Unfortunately, many algorithms proposed in the literature are geared towards thousands to millions of objects rather than billions, as is common in web-scale problems.

Our motivation for solving parametric flow is the problem of webpage tiering for search engine indices. While our methods are entirely general and could be applied to a range of other machine learning and optimization problems, we focus on webpage tiering as the illustrative example in this paper. The rationale for choosing this application is threefold: firstly, it is a real problem in search engines. Secondly, it provides very large datasets. Thirdly, in doing so we introduce a new problem to the machine learning community. That said, our approach would also be readily applicable to very large scale versions of the problems described in [2, 16, 6, 19].

The specific problem that will provide our running example is that of assigning webpages to several tiers of a search engine cache such that the time to serve a query is minimized. For a given query, a search engine returns a number of documents (typically 10). The time it takes to serve a query depends on where the documents are located. The first tier (or cache) is the fastest (using premium hardware, etc., and thus also often the smallest) and retrieves its documents with little latency. If even just a single document is located in a back tier, the delay is considerably increased since now we need to search the larger (and slower) tiers until the desired document is found. Hence it is our goal to assign the most popular documents to the fastest tiers while taking the interactions between documents into account.

2 The Tiering Problem

We would like to allocate documents $d \in D$ into the k tiers of storage at our disposal. Moreover, let $q \in Q$ be the queries arriving at a search engine, with finite values $v_q > 0$ (e.g. the probability of the query, possibly weighted by the relevance of the retrieved results), and a set of documents $D_q$ retrieved for the query.
This input structure is stored in a bipartite graph G with vertices $V = D \cup Q$ and edges $(d, q) \in E$ whenever document d should be retrieved for query q. The k tiers, with tier 1 as the most desirable and k the least (most costly for retrieval), form an increasing sequence of cumulative capacities $C_t$, with $C_t$ indicating how many pages can be stored by tiers $t' \le t$ together. Without loss of generality, assume $C_{k-1} < |D|$ (that is, the last tier is required to hold all documents, or the problem can be reduced). Finally, for each $t \ge 2$ we assume that there is a penalty $p_{t-1} > 0$ incurred by a tier-miss at level t (known as "fallthrough" from tier t − 1 to tier t). And since we have to access tier 1 regardless, we set $p_0 = 0$ for convenience. For instance, retrieving a page in tier 3 incurs a total penalty of $p_1 + p_2$.

2.1 Background

Optimization of index structures and data storage is a key problem in building an efficient search engine. Much work has been invested in building efficient inverted indices which are optimized for query processing [17, 3]. These papers all deal with the issue of optimizing the data representation for a given query and how an inverted index should be stored and managed for general queries. In particular, [3, 14] address the problem of computing the top-k results without scanning over the entire inverted lists. Recently, machine learning algorithms have been proposed [5] to improve the ordering within a given collection beyond the basic inverted indexing setup [3]. A somewhat orthogonal strategy to this is to decompose the collection of webpages into a number of disjoint tiers [15] ordered in decreasing level of relevance. That is, documents are partitioned according to their relevance for answering queries into different tiers of (typically) increasing size. This leads to putting the most frequently retrieved or the most relevant (according to the value of the query, the market or other operational parameters) pages into the top tier with the smallest latency and relegating the less frequently retrieved or the less relevant pages into bottom tiers. Since queries are often carried out by sequentially searching this hierarchy of tiers, an improved ordering minimizes latency, improves user satisfaction, and reduces computation.

A naive implementation of this approach would simply assign a value to each page in the index and arrange pages such that the most frequently accessed ones reside in the highest levels of the cache. Unfortunately this approach is suboptimal: in order to answer a given query well, a search engine typically does not return a single page as a result but rather a list of r (typically r = 10) pages. This means that if even just one of these pages is found in a much lower tier, we either need to search the back tiers to retrieve this page or alternatively we need to sacrifice result relevance. At first glance, the problem is daunting: we need to take all correlations among pages induced by user queries into account. Moreover, for reasons of practicality we need to design an algorithm which is linear in the amount of data presented (i.e. the number of queries) and whose storage requirements are only linear in the number of pages. Finally, we would like to obtain guarantees in terms of performance for the assignment that we obtain from the algorithm. Our problem, even for r = 2, is closely related to the weighted k-densest subgraph problem, which is NP hard [13].
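As a quick illustration of the cost model above, the following sketch evaluates the total fall-through penalty of a tier assignment on a query log. The data structures and the function name are assumptions made for this example, not part of the paper.

```python
def total_fallthrough_cost(queries, tier_of, penalties):
    """Total tier-miss penalty over a query log; an illustrative sketch.

    queries   : iterable of (value v_q, list of document ids D_q)
    tier_of   : dict mapping document id -> tier index z_d in {1, ..., k}
    penalties : list [p_1, ..., p_{k-1}]; reaching tier t costs p_1 + ... + p_{t-1}
    """
    total = 0.0
    for v_q, docs in queries:
        u_q = max(tier_of[d] for d in docs)     # worst tier among the results
        total += v_q * sum(penalties[:u_q - 1])  # fall-through cost of the query
    return total
```

With k = 3 and penalties [p1, p2], a query whose worst document sits in tier 3 contributes v_q (p1 + p2), matching the example in the text.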
2.2 Optimization Problem

Since the problem we study is somewhat more general than the parametric flow problem, we give a self-contained derivation of the problem and derive the more general version beyond [7]. For brevity, we relegate all proofs to the Appendix. We denote the result set for query q by $D_q := \{d : (d, q) \in G\}$, and similarly, the set of queries seeking a document d by $Q_d := \{q : (d, q) \in G\}$. For a document d we denote by $z_d \in \{1, \ldots, k\}$ the tier storing d. Define

$$u_q := \max_{d \in D_q} z_d \quad (1)$$

as the number of cache levels we need to traverse to answer query q. In other words, it is the document found in the worst tier which determines the cost of access. Integrating the optimization over $u_q$, we may formulate the tiering problem as an integer program:

$$\min_{z, u} \; \sum_{q \in Q} v_q \sum_{t=1}^{u_q - 1} p_t \quad \text{subject to } z_d \le u_q \le k \text{ for all } (q, d) \in G \text{ and } \sum_{d \in D} \{z_d \le t\} \le C_t \;\; \forall t. \quad (2)$$

Note that we replaced the maximization condition (1) by a linear inequality in preparation for a reformulation as an integer linear program. Obviously, the optimal $u_q$ for a given z will satisfy (1).

Lemma 1 Assume that $C_k \ge |D| > C_{k-1}$. Then there exists an optimal solution of (2) such that $\sum_d \{z_d \le t\} = C_t$ for all $1 \le t < k$.

In the following we address several issues associated with the optimization problem: A) Eq. (2) is an integer program and consequently it is discrete and nonconvex. We show that there exists a convex reformulation of the problem. B) It is at a formidable scale (often $|D| > 10^9$). Section 3.4 presents a stochastic gradient descent procedure to solve the problem in a few passes through the database. C) We have insufficient data for an accurate tier assignment for pages associated with tail queries. This can be addressed by a smoothing estimator for the tier index of a page.

2.3 Integer Linear Program

We now replace the selector variables $z_d$ and $u_q$ by binary variables via a "thermometer" code. Let

$$x \in \{0; 1\}^{D \times (k-1)} \text{ subject to } x_{dt} \ge x_{d,t+1} \text{ for all } d, t \quad (3a)$$
$$y \in \{0; 1\}^{Q \times (k-1)} \text{ subject to } y_{qt} \ge y_{q,t+1} \text{ for all } q, t \quad (3b)$$

be index variables. Thus we have the one-to-one mapping $z_d = 1 + \sum_t x_{dt}$ and $x_{dt} = \{z_d > t\}$ between z and x. For instance, for k = 5, a middle tier z = 3 maps into x = (1, 1, 0, 0) (requiring two fallthroughs), and the best tier z = 1 corresponds to x = (0, 0, 0, 0). The mapping between u and y is analogous. The constraint $u_q \ge z_d$ can simply be rewritten coordinate-wise as $y_{qt} \ge x_{dt}$. Finally, the capacity constraints assume the form $\sum_d x_{dt} \ge |D| - C_t$. That is, the number of pages allocated to higher tiers is at least $|D| - C_t$. Define remaining capacities $\bar{C}_t := |D| - C_t$ and use the variable transformation (3); we then have the following integer linear program:

$$\min_{x, y} \; v^\top y\, p \quad (4a)$$
$$\text{subject to } x_{dt} \ge x_{d,t+1} \text{ and } y_{qt} \ge y_{q,t+1} \text{ and } y_{qt} \ge x_{dt} \text{ for all } (q, d) \in G \quad (4b)$$
$$\sum_d x_{dt} \ge \bar{C}_t \text{ for all } 1 \le t \le k - 1 \quad (4c)$$
$$x \in \{0; 1\}^{D \times (k-1)}; \; y \in \{0; 1\}^{Q \times (k-1)} \quad (4d)$$

where $p = (p_1, \ldots, p_{k-1})^\top$ and $v = (v_1, \ldots, v_{|Q|})^\top$ are column vectors, and y is a matrix $(y_{qt})$. The advantage of (4) is that while still discrete, we now have linear constraints and a linear objective function. The only problem is that the variables x and y need to be binary.

Lemma 2 The solutions of (2) and (4) are equivalent.
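A minimal sketch of the thermometer code of (3a); the helper names are assumptions.

```python
def tier_to_thermometer(z, k):
    """Map tier z in {1, ..., k} to x in {0,1}^(k-1) with x_t = [z > t]."""
    return [1 if z > t else 0 for t in range(1, k)]

def thermometer_to_tier(x):
    """Inverse map: z = 1 + sum_t x_t."""
    return 1 + sum(x)

# The examples from the text: z = 3 with k = 5, and the best tier z = 1.
assert tier_to_thermometer(3, 5) == [1, 1, 0, 0]
assert tier_to_thermometer(1, 5) == [0, 0, 0, 0]
assert thermometer_to_tier([1, 1, 0, 0]) == 3
```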
2.4 Hardness

Before discussing convex relaxations and approximation algorithms it is worthwhile to review the hardness of the problem: consider only two tiers, and a case where we retrieve only two pages per query. The corresponding graph has vertices D and edges $(d, d') \in E$ whenever d and d' are displayed together to answer a query. In this case the tiering problem reduces to one of finding a subset of vertices $D' \subseteq D$ such that the induced subgraph has the largest number (possibly weighted) of edges subject to the capacity constraint $|D'| \le C$. For the case of k pages per query, simply assume that k − 2 of the pages are always the same. Hence the problem of finding the best subset reduces to the case of 2 pages per query. This problem is identical to the k-densest subgraph problem, which is known to be NP hard [13].

Figure 1: k-densest subgraph reduction. Vertices correspond to URLs and queries correspond to edges. Queries can be served whenever the corresponding URLs are in the cache. This is the case whenever the induced subgraph contains the edge.

3 Convex Programming

The key idea in solving (4) is to relax the capacity constraints for the tiers. This renders the problem totally unimodular and therefore amenable to a solution by a linear program. We replace the capacity constraint by a partial Lagrangian. This does not ensure that we will be able to meet the capacity constraints exactly anymore. Instead, we will only be able to state ex-post that the relaxed solution is optimal for the observed capacity distribution. Moreover, we are still able to control capacity by a suitable choice of the associated Lagrange multipliers.

3.1 Linear Program

Instead of solving (4) we study the linear program:

$$\min_{x, y} \; v^\top y\, p - \mathbf{1}^\top x\, \lambda \quad \text{subject to } x_{dt} \ge x_{d,t+1} \text{ and } y_{qt} \ge y_{q,t+1}, \quad y_{qt} \ge x_{dt} \text{ for } (q, d) \in G, \text{ and } x_{dt}, y_{qt} \in [0, 1]. \quad (5)$$

Here $\lambda = (\lambda_1, \ldots, \lambda_{k-1})^\top$ acts as a vector of Lagrange multipliers $\lambda_t \ge 0$ for enforcing capacity constraints and $\mathbf{1}$ denotes a column of $|D|$ ones. We now relate the solution of (5) to that of (4).

Lemma 3 For any choice of $\lambda$ with $\lambda_t \ge 0$ the linear program (5) has an integral solution, i.e. there exists some $x^\star, y^\star$ satisfying $x^\star_{dt}, y^\star_{qt} \in \{0; 1\}$ which minimize (5). Moreover, for $\bar{C}_t = \sum_d x^\star_{dt}$ the solution $(x^\star, y^\star)$ also solves (4).

We have succeeded in reducing the complexity of the problem to that of a linear program, yet it is still formidable and it needs to be solved to optimality for an accurate caching prescription. Moreover, we need to adjust $\lambda$ such that we satisfy the desired capacity constraints (approximately).

Lemma 4 Denote by $L^\star(\lambda)$ the value of (5) at the solution of (5) and let $L(\lambda) := L^\star(\lambda) + \sum_t \bar{C}_t \lambda_t$. Then $L(\lambda)$ is concave in $\lambda$ and, moreover, $L(\lambda)$ is maximized for a choice of $\lambda$ where the solution of (5) satisfies the constraints of (4).

Note that while the above two lemmas provide us with a guarantee that for every $\lambda$ and for every associated integral solution of (5) there exists a set of capacity constraints for which this is optimal, and that such a capacity-satisfying constraint can be found efficiently by concave maximization, they do not guarantee the converse: not every capacity constraint can be satisfied by the convex relaxation, as the following example demonstrates.

Example 1 Consider the case of 2 tiers (hence we drop the index t), a single query q and 3 documents d. Set the capacity constraint of the first tier to 1. In this case it is impossible to avoid a cache miss in the ILP. In the LP relaxation of (4), however, the optimal (non-integral) solution is to set all $x_d = \frac{1}{3}$ and $y_q = \frac{1}{3}$. The partial Lagrangian $L(\lambda)$ is maximized for $\lambda = p/3$. Moreover, for $\lambda > p/3$ the optimization problem (5) has as its solution x = y = 1, whereas for $\lambda < p/3$ the solution is x = y = 0. For the critical value any convex combination of those two values is valid. This example shows why the optimal tiering problem is NP hard: it is possible to design cases where the tier assignment for a page is highly ambiguous. Note that for the integer programming problem with capacity constraint C = 2 we could allocate an arbitrary pair of pages to the cache. This does not change the objective function (total cache miss) or feasibility.
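Example 1 can be checked numerically by brute force over the integral corners of (5). The snippet below is our own illustration under the assumption of a unit query value; it is not from the paper.

```python
import itertools

def lp_example(lam, p=1.0):
    """Objective of (5) for one query (value 1), three documents, two tiers,
    minimised over the integral corners x in {0,1}^3 with y = max_d x_d.
    A sketch for checking Example 1, not the paper's code."""
    best = None
    for x in itertools.product([0, 1], repeat=3):
        y = max(x)
        val = y * p - lam * sum(x)
        best = val if best is None else min(best, val)
    return best

# L(lam) = L*(lam) + C_bar * lam, with remaining capacity C_bar = |D| - C = 2.
for lam in [0.2, 1.0 / 3.0, 0.5]:
    print(lam, lp_example(lam) + 2 * lam)
```

The printed partial Lagrangian $L(\lambda)$ peaks at $\lambda = p/3 = 1/3$, with the minimizer jumping from x = y = 0 to x = y = 1 at that point, exactly as stated in the example.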
Figure 2: Left: maximum flow problem for a problem of 4 pages and 3 queries. The minimum cut of the directed graph needs to sever all pages leading to a query or, alternatively, it needs to sever the corresponding query, incurring a penalty of $(1 - v_q)$. This is precisely the tiering objective function for the case of two tiers. Right: the same query graph for three tiers. Here the black nodes and dashed edges represent a copy of the original graph; additionally, each page in the original graph also has an infinite-capacity link to the corresponding query in the additional graph.

3.2 Graph Cut Equivalence

It is well known that the case of two tiers (k = 2) can be relaxed to a min-cut, max-flow problem [7, 4]. The transformation works by designing a bipartite graph between queries q and documents d. All documents are connected to the source s by edges with capacity $\lambda$, and queries are connected to the sink t with capacity $(1 - v_q)$. Documents d retrieved for a query q are connected to q with capacity $\infty$. Figure 2 provides an example of such a maximum-flow, minimum-cut graph from source s to sink t. The conversion to several tiers is slightly more involved. Denote by $v_{di}$ vertices associated with document d and tier i and, moreover, denote by $w_{qi}$ vertices associated with a query q and tier i. Then the graph is given by edges $(s, v_{di})$ with capacities $\lambda_i$; edges $(v_{di}, w_{qi'})$ for all (document, query) pairs and for all $i \le i'$, endowed with infinite capacity; and edges $(w_{qi}, t)$ with capacity $(1 - v_q)$. As with the simple caching problem, we need to impose a cut on any query edge for which not all incoming page edges have been cut. The key difference is that in order to benefit from storing pages in a better tier we need to guarantee that the page is contained in the lower tier, too.

3.3 Variable Reduction

We now simplify the relaxed problem (5) further by reducing the number of variables, without sacrificing integrality of the solution. A first step is to substitute $y_{qt} = \max_{d \in D_q} x_{dt}$, to obtain an optimization problem over the documents alone:

$$\min_x \; v^\top \Big(\max_{d \in D_q} x_{dt}\Big) p - \mathbf{1}^\top x\, \lambda \quad \text{subject to } x_{dt} \ge x_{dt'} \text{ for } t' > t \text{ and } x_{dt} \in [0, 1]. \quad (6)$$

Note that the monotonicity condition $y_{qt} \ge y_{qt'}$ for $t' > t$ is automatically inherited from that of x. The solution of (6) is still integral since the problem is equivalent to one with an integral solution.

Lemma 5 We may scale $p_t$ and $\lambda_t$ together by constants $c_t > 0$, such that $p'_t / p_t = c_t = \lambda'_t / \lambda_t$. The resulting solution of this new problem (6) with $(p', \lambda')$ is unchanged.

Essentially, problem (5) as parameterized by $(p, \lambda)$ yields solutions which form equivalence classes. Consequently, for the convenience of solving (5), we may assume $p'_t = 1$ for $t \ge 1$. We only need to consider the original p for evaluating the objective using solution z (thus, the same observed capacities $C_t$). Since (5) is a relaxation of (4), this reformulation can be extended to the integer linear program, too. Moreover, under reasonable conditions on the capacity constraints, there is more structure in $\lambda$.
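As an aside, the two-tier construction of Section 3.2 can be sketched with networkx. The graph layout follows the description above; the convention that documents whose source edge was severed (sink side of the cut) are the cached ones is our reading of Figure 2, so treat it as an assumption.

```python
import networkx as nx

def two_tier_cut(queries, lam):
    """Two-tier relaxation via min-cut (Section 3.2); a sketch, not the paper's code.

    queries : iterable of (query id, value v_q, list of document ids D_q)
    lam     : Lagrange multiplier trading off cache size
    """
    G = nx.DiGraph()
    for q, v_q, docs in queries:
        G.add_edge(("q", q), "t", capacity=1.0 - v_q)   # query -> sink
        for d in docs:
            G.add_edge("s", ("d", d), capacity=lam)     # source -> document
            # No capacity attribute: networkx treats the edge as infinite.
            G.add_edge(("d", d), ("q", q))
    _, (s_side, t_side) = nx.minimum_cut(G, "s", "t")
    # Assumption: documents on the sink side had their source edge cut and
    # correspond to pages kept in the first tier under our reading of Figure 2.
    return {n[1] for n in t_side if isinstance(n, tuple) and n[0] == "d"}
```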
Lemma 6 Assume that $\bar{C}_t$ is monotonically decreasing and that $p_t = 1$ for $t \ge 1$. Then any choice of $\lambda$ satisfying the capacity constraints is monotonically non-increasing.

One interpretation of this is that, unless the tiers are increasingly inexpensive, the optimal solution would assign pages in a fashion yielding empty middle tiers (the remaining capacities $\bar{C}_t$ not strictly decreasing). This monotonicity simplifies the problem. Consequently, we exploit this fact to complete the variable reduction. Define $\Delta\lambda_i := \lambda_i - \lambda_{i+1}$ for $i \ge 1$ (all non-negative by virtue of Lemma 6) and

$$f_\lambda(\theta) := -\lambda_{k-1}\,\theta + \sum_{i=1}^{k-2} \Delta\lambda_i \max(0, i - \theta) \quad \text{for } \theta \in [0, k-1]. \quad (7)$$

Note that by construction $\partial_\theta f_\lambda(\theta) = -\lambda_i$ whenever $\theta \in (i - 1, i)$. The function $f_\lambda$ is clearly convex, which helps describe our tiering problem via the following convex program:

$$\min_z \; \sum_{q} v_q \max_{d \in D_q} z_d + \sum_d f_\lambda(z_d - 1) \quad \text{for } z_d \in [1, k]. \quad (8)$$

We now use only one variable per document. Moreover, the convex constraints are simple box constraints. This simplifies convex projections, as needed for online programming.

Lemma 7 The solution of (8) is equivalent to that of (5).

3.4 Online Algorithm

We now turn our attention to a fast algorithm for minimizing (8). While greatly simplified relative to (2), it still remains a problem of billions of variables. The key observation is that the objective function of (8) can be written as a sum over the following loss functions:

$$\ell_q(z) := v_q \max_{d \in D_q} z_d + \frac{1}{|Q|} \sum_d f_\lambda(z_d - 1), \quad (9)$$

where |Q| denotes the cardinality of the query set. The transformation suggests a simple stochastic gradient descent optimization algorithm: traverse the input stream by queries, and update the values of $x_d$ of all those documents d that would need to move into the next tier in order to reduce service time for a query. Subsequently, perform a projection of the page vectors to the set [1, k] to ensure that we do not assign pages to non-existent tiers.

Algorithm 1 Tiering Optimization
  Initialize all $z_d = 0$
  Initialize n = 100
  for i = 1 to MAXITER do
    for all $q \in Q$ do
      $\eta = 1/\sqrt{n}$ (learning rate)
      $n \leftarrow n + 1$ (increment counter)
      Update $z \leftarrow z - \eta\, \partial_z \ell_q(z)$
      Project z to $[1, k]^D$ via $z_d \leftarrow \max(1, \min(k, z_d))$
    end for
  end for

Algorithm 1 proceeds by processing the input query-result records $(q, v_q, D_q)$ as a stream comprising the set of pages that need to be displayed to answer a given query. More specifically, it updates the tier preferences of the pages that have the lowest tier scores for each level and it decrements the preferences for all other pages. We may apply results for online optimization algorithms [1] to show that a small number of passes through the dataset suffices.

Lemma 8 The solution obtained by Algorithm 1 converges at rate $O(\sqrt{(\log T)/T})$ to its minimum value. Here T is the number of queries processed.

3.5 Deferred and Approximate Updates

The naive implementation of Algorithm 1 is infeasible as it would require us to update all |D| coordinates of $x_d$ for each query q. However, it is possible to defer the updates until we need to inspect $z_d$ directly. The key idea is to exploit that for all $z_d$ with $d \notin D_q$ the updates only depend on the value of $z_d$ at update time (Section A.1) and that $f_\lambda$ is piecewise linear and monotonically decreasing.

Algorithm 2 Deferred updates
  Observe current time $n_0$
  Read timestamp n for document d
  Compute update steps $\Delta = \Delta(n_0, n)$
  repeat
    $j = \lfloor z_d + 1 \rfloor$ (next largest tier)
    $t = (j - z_d)/\lambda_j$ (change needed to reach next tier)
    if $t > \Delta$ then
      $z_d \leftarrow z_d + \lambda_j \Delta$ and $\Delta = 0$ (partial step; we are done)
    else
      $\Delta \leftarrow \Delta - t$ and $z_d \leftarrow z_d + 1$ (full step; next tier)
    end if
  until $\Delta = 0$ (no more updates) or $z_d = k - 1$ (bottom tier)
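The inner update of Algorithm 1 can be sketched directly from (9): a subgradient step on the arg-max document plus the piecewise-linear pull of $f_\lambda$ on every document. Everything below (the names and the dense per-query update) is an illustrative assumption; the paper's actual implementation defers the dense part as in Algorithm 2.

```python
import numpy as np

def sgd_tiering(queries, num_docs, k, lam, max_iter=10, n0=100):
    """Stochastic gradient descent on Eq. (8); a sketch of Algorithm 1.

    queries : list of (v_q, list of document ids D_q)
    lam     : array [lambda_1, ..., lambda_{k-1}], non-increasing (Lemma 6)
    Returns continuous tier scores z in [1, k].
    """
    z = np.ones(num_docs)              # start everyone in the best tier
    n = n0
    Q = len(queries)
    for _ in range(max_iter):
        for v_q, docs in queries:
            eta = 1.0 / np.sqrt(n); n += 1
            # Subgradient of v_q * max_d z_d acts on the worst-tier document.
            worst = max(docs, key=lambda d: z[d])
            z[worst] -= eta * v_q
            # f_lam(z - 1) has slope -lambda_i on (i-1, i), so descent
            # drifts every document toward the back tiers at rate lambda_i.
            idx = np.minimum(np.floor(z - 1).astype(int), k - 2)
            z += eta * lam[idx] / Q
            np.clip(z, 1.0, k, out=z)  # project back to [1, k]
    return z
```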
3.6 Path Following

The tiering problem has the appealing property [19] that the solutions for increasing $\lambda$ form a nested subset. In other words, relaxing capacity constraints never demotes but only promotes pages. This fact can be used to design specialized solvers which work well at determining the entire solution path at once for moderate-sized problems [19]. Alternatively, we can simply take advantage of solutions for successive values of $\lambda$ in determining an approximate solution path by using the solution for $\lambda$ as the initialization for $\lambda'$. This strategy is well known as path-following in numerical optimization. In this context it is undesirable to solve the optimization for a particular value of $\lambda$ to optimality. Instead, we simply solve it approximately (using a small number of passes) and readjust $\lambda$. Due to the nesting property [19] and the fact that the optimal solutions are binary (via total unimodularity), the average over solutions on the entire path provides an ordering of pages into tiers. Thus,

Lemma 9 Denote by $x_d(\lambda)$ the solution of the two-tier optimization problem for a given value of $\lambda$. Moreover, denote by $\bar{x}_d := [\lambda' - \lambda]^{-1} \int_\lambda^{\lambda'} x_d(\tau)\, d\tau$ the average value over a range of Lagrange multipliers. Then $\bar{x}_d$ provides an order for sorting documents into tiers for the entire range $[\lambda, \lambda']$.

In practice¹, we only choose a finite number of steps for near-optimal solutions. This yields

Algorithm 3 Path Following
  Initialize all $(x_{dt}) = z_d \in [1, k]$
  for each $\lambda \in \Lambda$ do
    Refine variables $x_{dt}(\lambda)$ by Algorithm 1 using a small number of iterations.
  end for
  Average the variables $x_{dt} = \sum_{\lambda \in \Lambda} x_{dt}(\lambda)/|\Lambda|$
  Sort the documents with the resulting total scores $z_d$
  Fill the ordered documents into tier 1, then tier 2, etc.

Experiments show that, using synthetic data (where it was feasible to compute and compare with the optimal LP solution pointwise), even $|\Lambda| = 5$ values of $\lambda$ produce near-optimal results in the two-tier case. Moreover, we may carry out the optimization procedure for several parameters simultaneously. This is advantageous since the main cost is sequential RAM read-write access rather than CPU speed.

4 Experiments

To examine the efficacy of our algorithm at web-scale we tested it with real data from a major search engine. The results of our proposed methods are compared to those of the max and sum heuristics of Section A.2. We also performed experiments on small synthetic data (2-tier and 3-tier), where we were able to show that our algorithm converges to the exact solution given by an LP solver (Appendix C). However, since LP solvers are very slow, this is not feasible for web-scale problems. We processed the logs for one week of September 2009 containing results from the top geographic regions which include a majority of the search engine's user base. To simplify the heavy processing involved in collecting such a massive data set, we only record whether a particular result, defined as a (query, document) pair, appears in the top 10 (first result page) for a given session, and we aggregate the view counts of such results, which will be used as the session value $v_q$. In its entirety this subset contains about $10^8$ viewed documents and $1.6 \times 10^7$ distinct queries. We excluded results viewed only once, yielding a final data set of $8.4 \times 10^7$ documents.²
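Algorithm 3 then reduces to averaging such runs over a grid of multipliers and filling tiers in score order. The sketch below reuses the sgd_tiering helper above; the grid of multipliers and the cumulative-capacity interface are assumptions for illustration.

```python
import numpy as np

def path_following(queries, num_docs, k, lambda_grid, capacities):
    """Averaging step of Algorithm 3 (sketched, not the paper's code).

    lambda_grid : iterable of multiplier vectors, e.g. 5 values as in the text
    capacities  : cumulative capacities [C_1, ..., C_k] with C_k = num_docs
    """
    scores = np.mean(
        [sgd_tiering(queries, num_docs, k, lam, max_iter=3) for lam in lambda_grid],
        axis=0,
    )
    order = np.argsort(scores)      # best (lowest) tier scores first
    tiers = np.empty(num_docs, dtype=int)
    start = 0
    for t, cap in enumerate(capacities, start=1):
        tiers[order[start:cap]] = t  # fill tier 1, then tier 2, etc.
        start = cap
    return tiers
```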
For simplicity, our experiments are carried out for a two-tier (single cache) system such that the only design parameter is the relative size of the prime tier (the cache).

¹ This result can be readily extended to k > 2, and any probability measure over a set of Lagrangian values $\lambda \in \Lambda \subset \mathbb{R}_+^{k-1}$, so long as there are positive weights around the values yielding all the nested solutions.
² The search results for any fixed query vary for a variety of reasons, e.g. database updates. We approximate the session graph by treating queries with different result sets as if they were different. This does not change the optimization problem and keeps the model accurate. Moreover, we remove rare results by maintaining that the lowest count of a document is at least as large as the square root of the highest within the same session.

Figure 3: Left: Experimental results for real web-search data with $8.4 \times 10^7$ pages and $1.6 \times 10^7$ queries. Session miss rate for the online procedure and the max and sum heuristics (A.2). (The y-axis is normalized such that SUM-tier's first point is at 1.) As seen, the max heuristic cannot be optimal for any but small cache sizes, but it performs comparably well to Online. Right: Online outperforms MAX for cache sizes larger than 60%, sometimes more than twofold.

The ranking variant of our online Algorithm 3 (30 passes over the data) consistently outperforms the max and sum heuristics over a large span of cache sizes (Figure 3). Direct comparison can now be made between our online procedure and the max and sum heuristics since each one induces a ranking on the set of documents. We then calculate the session miss rate of each procedure at any cache size, and report the relative improvement of our online algorithm as ratios of miss rates in Figure 3 (right). The optimizer fits well in a desktop's RAM since 5 values of $\lambda$ only amount to about 2 GB of single-precision $x(\lambda)$. We measure a throughput of approximately 0.5 million query-sessions per second (qps) for this version, and about 2 million qps for smaller problems (as they incur fewer memory page faults). Billion-scale problems can readily fit in 24 GB of RAM by serializing computation one $\lambda$ value at a time. We also implemented a multi-threaded version utilizing 4 CPU cores, although its performance did not improve since memory and disk bandwidth limits had already been reached.

5 Discussion

We showed that very large tiering and densest subset optimization problems can be solved efficiently by a relatively simple online optimization procedure (some extensions are in Appendix B). It came somewhat as a surprise that the max heuristic often works nearly as well as the optimal tiering solution. Since we experienced this correlation on both synthetic and real data, we believe that it might be possible to prove approximation guarantees for this strategy whenever the bipartite graphs satisfy certain power-law properties. Some readers may question the need for a static tiering solution, given that data could, in theory, be reassigned between different caching tiers on the fly. The problem is that in the production systems of a search engine, such reassignment of large amounts of data may not always be efficient for operational reasons (e.g. different versions of the ranking algorithm, different versions of the index, different service levels, constraints on transfer bandwidth). In addition, tiering is a problem not restricted to the provision of webpages. It occurs in product portfolio optimization and other resource-constrained settings. We showed that it is possible to solve such problems at several orders of magnitude larger scale than what was previously considered feasible.
Acknowledgments

We thank Kelvin Fong for providing computer facilities. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program. This work was carried out while GL and NQ were with Yahoo! Labs.

References

[1] P. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS 20, Cambridge, MA, 2008.
[2] M. J. Eisner and D. G. Severance. Mathematical techniques for efficient record segmentation in large shared databases. J. ACM, 23(4):619-635, 1976.
[3] R. Fagin. Combining fuzzy information from multiple systems. In Fifteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 216-226, Montreal, Canada, 1996.
[4] L. R. Ford and D. R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8:399-404, 1956.
[5] S. Goel, J. Langford, and A. Strehl. Predictive indexing for fast search. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, NIPS, pages 505-512. MIT Press, 2008.
[6] D. Gusfield and C. U. Martel. A fast algorithm for the generalized parametric minimum cut problem and applications. Algorithmica, 7(5&6):499-519, 1992.
[7] D. Gusfield and E. Tardos. A faster parametric minimum-cut algorithm. Algorithmica, 11(3):278-290, 1994.
[8] I. Heller and C. Tompkins. An extension of a theorem of Dantzig's. In H. Kuhn and A. Tucker, editors, Linear Inequalities and Related Systems, volume 38 of Annals of Mathematics Studies. AMS, 1956.
[9] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[10] V. Kolmogorov, Y. Boykov, and C. Rother. Applications of parametric maxflow in computer vision. In ICCV, pages 1-8, 2007.
[11] Y. Nesterov and J.-P. Vial. Confidence level solutions for stochastic programming. Technical Report 2000/13, Universite Catholique de Louvain - Center for Operations Research and Economics, 2000.
[12] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, Stanford, CA, USA, Nov. 1998.
[13] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, New Jersey, 1982.
[14] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered document retrieval with frequency-sorted indexes. JASIS, 47(10):749-764, 1996.
[15] K. M. Risvik, Y. Aasheim, and M. Lidal. Multi-tier architecture for web search engines. In LA-WEB, pages 132-143. IEEE Computer Society, 2003.
[16] H. S. Stone. Critical load factors in two-processor distributed systems. IEEE Trans. Softw. Eng., 4(3):254-258, 1978.
[17] H. Yan, S. Ding, and T. Suel. Inverted index compression and query processing with optimized document ordering. In J. Quemada, G. Leon, Y. Maarek, and W. Nejdl, editors, 18th International Conference on World Wide Web, Madrid, Spain, pages 401-410. ACM, 2009.
[18] B. Zhang, J. Ward, and A. Feng. A simultaneous maximum flow algorithm for the selection model. Technical Report HPL-2005-91, Hewlett Packard Laboratories, 2005.
[19] B. Zhang, J. Ward, and Q. Feng. A simultaneous parametric maximum-flow algorithm for finding the complete chain of solutions. Technical Report HPL-2004-189, Hewlett Packard Laboratories, 2004.
[20] B. Zhang, J. Ward, and Q. Feng. Simultaneous parametric maximum flow algorithm with vertex balancing. Technical Report HPL-2005-121, Hewlett Packard Laboratories, 2005.
Natural Policy Gradient Methods with Parameter-based Exploration for Control Tasks

Atsushi Miyamae†‡, Yuichi Nagata†, Isao Ono†, Shigenobu Kobayashi†
†: Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Kanagawa, Japan
‡: Research Fellow of the Japan Society for the Promotion of Science
{miyamae@fe., nagata@fe., isao@, kobayasi@}dis.titech.ac.jp

Abstract

In this paper, we propose an efficient algorithm for estimating the natural policy gradient using parameter-based exploration; this algorithm samples directly in the parameter space. Unlike previous methods based on natural gradients, our algorithm calculates the natural policy gradient using the inverse of the exact Fisher information matrix. The computational cost of this algorithm is equal to that of conventional policy gradients, whereas previous natural policy gradient methods have a prohibitive computational cost. Experimental results show that the proposed method outperforms several policy gradient methods.

1 Introduction

Reinforcement learning can be used to handle policy search problems in unknown environments. Policy gradient methods [22, 20, 5] train parameterized stochastic policies by climbing the gradient of the average reward. The advantage of such methods is that one can easily deal with continuous state-action and continuing (not episodic) tasks. Policy gradient methods have thus been successfully applied to several practical tasks [11, 21, 16].

In the domain of control, a policy is often constructed from a controller and an exploration strategy. The controller is represented by a domain-appropriate, pre-structured parametric function. The exploration strategy is required to seek the parameters of the controller. Instead of directly perturbing the parameters of the controller, conventional exploration strategies perturb the resulting control signal. However, a significant problem with this sampling strategy is that the high variance in the gradient estimates leads to slow convergence. Recently, parameter-based exploration [18] strategies that search the controller parameter space by direct parameter perturbation have been proposed, and these have been demonstrated to work more efficiently than conventional strategies [17, 18, 13]. Another approach to speeding up policy gradient methods is to replace the gradient with the natural gradient [2], the so-called natural policy gradient [9, 4, 15]; this is motivated by the intuition that a change in the policy parameterization should not influence the result of the policy update. The combination of parameter-based exploration strategies and the natural policy gradient is expected to result in improvements in the convergence rate; however, such an algorithm has not yet been proposed.

Natural policy gradients with parameter-based exploration strategies do have a disadvantage in that the computational cost is high. The natural policy gradient requires the computation of the inverse of the Fisher information matrix (FIM) of the policy distribution; this is prohibitively expensive, especially for a high-dimensional policy. Unfortunately, parameter-based exploration strategies tend to have higher dimensions than control-based ones. Therefore, the expected method is difficult to apply to realistic control tasks.

In this paper, we propose a new reinforcement learning method that combines the natural policy gradient and parameter-based exploration.
We derive an efficient algorithm for estimating the natural policy gradient with a particular exploration strategy implementation. Our algorithm calculates the natural policy gradient using the inverse of the exact FIM and the Monte Carlo-estimated gradient. The resulting algorithm, called natural policy gradients with parameter-based exploration (NPGPE), has a computational cost similar to that of conventional policy gradient algorithms. Numerical experiments show that the proposed method outperforms several policy gradient methods, including the current state-of-the-art NAC [15] with control-based exploration. 2 Policy Search Framework We consider the standard reinforcement learning framework in which an agent interacts with a Markov decision process. In this section, we review the estimation of policy gradients and describe the difference between control- and parameter-based exploration. 2.1 Markov Decision Process Notation At each discrete time t, the agent observes state st ? S, selects action at ? A, and then receives an instantaneous reward rt ? < resulting from a state transition in the environment. The state S and the action A are both defined as continuous spaces in this paper. The next state st+1 is chosen according to the transition probability pT (st+1 |st , at ), and the reward rt is given randomly according to the expectation R(st , at ). The agent does not know pT (st+1 |st , at ) and R(st , at ) in advance. The objective of the reinforcement learning agent is to construct a policy that maximizes the agent?s performance. A parameterized policy ?(a|s, ?) is defined as a probability distribution over an action space under a given state with parameters ?. We assume that each ? ? <d has a unique well-defined stationary distribution pD (s|?). Under this assumption, a natural performance measure for infinite horizon tasks is the average reward Z Z ?(?) = pD (s|?) ?(a|s, ?)R(s, a)dads. S 2.2 A Policy Gradients Policy gradient methods update policies by estimating the gradient of the average reward w.r.t. the P? policy parameters. The state-action value is Q? (s, a) = E[ t=1 rt ? ?(?)|s1 = s, a1 = a, ?], and it is assumed that ?(a|s, ?) is differentiable w.r.t. ?. The exact gradient of the average reward (see [20]) is given by Z Z ?? ?(?) = pD (s|?) ?(a|s, ?)?? log ?(a|s, ?)Q? (s, a)dads. (1) S A The natural gradient [2] has a basis in information geometry, which studies the Riemannian geometric structure of the manifold of probability distributions. A result in information geometry states that the FIM defines a Riemannian metric tensor on the space of probability distributions [3] and that the direction of the steepest descent on a Riemannian manifold is given by the natural gradient, given by the conventional gradient premultiplied by the inverse matrix of the Riemannian metric tensor [2]. Thus, the natural gradient can be computed from the gradient and the FIM, and it tends to converge faster than the conventional gradient. Kakade [9] applied the natural gradient to policy search; this was called as the natural policy gra? ? ?(?) ? F?1 ?? ?(?) is given by the dient. If the FIM is invertible, the natural policy gradient ? ? policy gradient premultiplied by the inverse matrix of the FIM F? . In this paper, we employ the FIM proposed by Kakade [9], defined as Z Z F? = pD (s|?) ?(a|s, ?)?? log ?(a|s, ?)?? log ?(a|s, ?)T dads. S A 2 Figure 1: Illustration of the main difference between control-based exploration and parameter-based exploration. 
The controller ?(u|s, w) is represented by a single-layer perceptron. While the controlbased exploration strategy (left) perturbs the resulting control signal, the parameter-based exploration strategy (right) perturbs the parameters of the controller. 2.3 Learning from Samples The calculation of (1) requires knowledge of the underlying transition probabilities pD (s|?). The GPOMDP algorithm [5] instead computes a Monte Carlo approximation of (1): the agent interacts with the environment, producing an observation, action, and reward sequence {s1 , a1 , r1 , s2 , ..., sT , aT , rT }. Under mild technical assumptions, the policy gradient approximation is T 1X ?? ?(?) ? rt zt , T t=1 where zt = ?zt?1 + ?? log ?(at |st , ?) is called the eligibility trace [12], ?? log ?(at |st , ?) is called the characteristic eligibility [22], and ? denotes the discount factor (0 ? ? < 1). As ? ? 1, the estimation approaches the true gradient 1 , but the variance increases (? is set to 0.9 in all ? ? log ?(at |st , ?) ? F?1 ?? log ?(at |st , ?). Therefore, the natural policy experiments). We define ? ? gradient approximation is T T X 1X ? ? ?(?) ? 1 ?t , ? F?1 r z = rt z t t T t=1 ? T t=1 (2) ? ? log ?(at |st , ?). To estimate the natural policy gradient, the heuristic sug?t = ?? where z zt?1 + ? gested by Kakade [9] used 1 1 F?,t = (1 ? )F?,t?1 + (?? log ?(at |st , ?)?? log ?(at |st , ?)T + ?I), (3) t t the online estimate of the FIM, where ? is a small positive constant. 2.4 Parameter-based Exploration In most control tasks, we attempt to have a (deterministic or stochastic) controller ?(u|s, w) and an exploration strategy, where u ? U ? <m denotes control and w ? W ? <n , the parameters of the controller. The objective of learning is to seek suitable values of the parameters w, and the exploration strategy is required to carry out stochastic sampling near the current parameters. A typical exploration strategy model, we call control-based exploration, would be a normal distribution for the control space (Figure1 (left)). In this case, the action of the agent is control, and the policy is represented by ? ? 1 1 T ?1 exp ? (u ? ?(s, w)) ? (u ? ?(s, w)) : S ? U, ?U (u|s, ?) = 2 (2?)m/2 |?|1/2 where ? is the m ? m covariance matrix and the agent seeks ? = hw, ?i. The control at time t is generated by ? t = ?(st , w), u ut ? N (? ut , ?). 1 [5] showed that the approximation error is proportional to (1??)/(1?|?2 |), where ?2 is the sub-dominant eigenvalue of the Markov chain 3 One useful feature of such a Gaussian unit [22] is that the agent can potentially control its degree of exploratory behavior. The control-based exploration strategy samples near the output of the controller. However, the structures of the parameter space and the control space are not always identical. Therefore, the sampling strategy generates controls that are not likely to be generated from the current controller, even if the exploration variances decrease. This property leads to large variance gradient estimates. This might be one reason why the policy improvement gets stuck. To address this issue, Sehnke et al. [18] introduced a different exploration strategy for policy gradient methods called policy gradients with parameter-based exploration (PGPE). In this approach, the action of the agent is the parameters of the controller, and the policy is represented by ? ? 1 1 ? ?1 (w ? ? w)T ? ? ?) = ? ? w) : S ? W, ?W (w|s, exp ? (w ? 1/2 2 (2?)n/2 |?| ? is the n ? n covariance matrix and the agent seeks ? = hw, ?i. ? 
The controller is included where ? in the dynamics of the environment, and the control at time t is generated by ? ? t ? N (w, ?), w ? t ). ut = ?(st , w GPOMDP-based methods can estimate policy gradients such as partially observable settings, i.e., the ? ?) excludes the observation of the current state. Because this exploration strategy policy ?W (w|s, directly perturbs the parameters (Figure1 (right)), the samples are generated near the current parameters under small exploration variances. Note that the advantage of this framework is that because the gradient is estimated directly by sampling the parameters of the controller, the implementation ? of the policy gradient algorithms does not require ?? ?, which is difficult to derive from complex controllers. Sehnke et al. [18] demonstrated that PGPE can yield faster convergence than the control-based exploration strategy in several challenging episodic tasks. However, the parameter-based exploration tends to have a higher dimension than the control-based one. Therefore, because of the computational cost of the inverse of F? calculated by (3), natural policy gradients find limited applications. 3 Natural Policy Gradients with Parameter-based Exploration In this section, we propose a new algorithm called natural policy gradients with parameter-based exploration (NPGPE) for the efficient estimation of the natural policy gradient. 3.1 Implementation of Gaussian-based Exploration Strategy ? We employ the policy representation model ?(w|?), a multivariate normal distribution with parameters ? = hw, Ci, where w represents the mean and C, the Cholesky decomposition of the covariance ? such that C is an n ? n upper triangular matrix and ? ? = CT C. Sun et al. [19] noted matrix ? two advantages of this implementation: C makes explicit the n(n + 1)/2 independent parameters ? in addition, the diagonal elements of C are the square roots of determining the covariance matrix ?; ? and therefore, CT C is always positive semidefinite. In the remainder of the the eigenvalues of ?, text, we consider ? to be an [n(n + 3)/2]-dimensional column vector consisting of the elements of w and the upper-right elements of C, i.e., ? = [wT , (C1:n,1 )T , (C2:n,2 )T , ..., (Cn:n,n )T ]T . Here, Ck:n,k is the sub-matrix in C at row k to n and column k. 3.2 Inverse of Fisher Information Matrix Previous natural policy gradient methods [9] use the empirical FIM, which is estimated from a ? sample path. Such methods are highly inefficient for ?(w|?) to invert the empirical FIM, a matrix with O(n4 ) elements. We avoid this problem by directly computing the exact FIM. 4 Algorithm 1 Natural Policy Gradient Method with Parameter-based Exploration Require: ? = hw, Ci: policy parameters, ?(u|s, w): controller, ?: step size, ?: discount rate, b: baseline. ?0 = 0, observe s1 . 1: Initialize z 2: for t = 1, ... do ? t = CT ?t + w. 3: Draw ?t ? N (0, I), compute action w ? t ), obtain observation st+1 and reward rt . 4: Execute ut ? ?(ut |st , w ? w log ?(w ? C log ?(w ? t |?) = w ? t ? w, ? ? t |?) = {triu(?t ?tT ) ? 12 diag(?t ?tT ) ? 12 I}C 5: ? ? ? log ?(w ?t = ?? ? t |?) 6: z zt?1 + ? 7: ? ? ? + ?(rt ? b)? zt 8: end for ? Substituting ? = ?(w|?) into (1), we can rewrite the policy gradient to obtain Z Z ? ? ? wds. ? ?? ?(?) = pD (s|?) ?(w|?)? ? log ?(w|?)Q ? (s, w)d S W Furthermore, the FIM of this distribution is Z Z ? ? ? T dwds ? F? = pD (s|?) ?(w|?)? ? log ?(w|?)? ? log ?(w|?) S W Z ? ? ? T dw. ? = ?(w|?)? ? log ?(w|?)? ? log ?(w|?) W Because F? 
is independent of p_D(s|\theta), we can use the real FIM. Sun et al. [19] proved that the precise FIM of the Gaussian distribution N(w, C^T C) becomes a block-diagonal matrix diag(F_0, ..., F_n) whose first block F_0 is identical to \tilde{\Sigma}^{-1} and whose k-th (1 \le k \le n) block F_k is given by

F_k = \begin{pmatrix} c_{k,k}^{-2} & 0 \\ 0 & 0 \end{pmatrix} + \tilde{\Sigma}^{-1}_{k:n,k:n} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^{-1} \left( v_k v_k^T + I \right) C^{-T} \begin{bmatrix} 0 \\ I_{\bar{k}} \end{bmatrix},

where v_k denotes the n-dimensional column vector whose only nonzero element is the k-th element, which is one, and I_{\bar{k}} is the (n-k+1)-dimensional identity matrix. Further, Akimoto et al. [1] derived the inverse matrix of the k-th diagonal block F_k of the FIM. Because F_\theta is a block-diagonal matrix and C is upper triangular, it is easy to verify that the inverse matrix of the FIM is given block-wise by

F_k^{-1} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( I - \tfrac{1}{2} v_k v_k^T \right) C \begin{bmatrix} 0 \\ I_{\bar{k}} \end{bmatrix},

where we use

v_k^T C \begin{pmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{pmatrix} C^{-1} = v_k^T \quad and \quad \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C \begin{pmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{pmatrix} C^{-1} = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix}.   (4)

3.3 Natural Policy Gradient

Now, we derive the eligibility premultiplied by the inverse matrix of the FIM, \tilde{\nabla}_\theta \ln \pi(\tilde{w}_t|\theta) = F_\theta^{-1} \nabla_\theta \ln \pi(\tilde{w}_t|\theta), in the same manner as [1]. The characteristic eligibility w.r.t. w is given by

\nabla_w \ln \pi(\tilde{w}_t|\theta) = \tilde{\Sigma}^{-1} (\tilde{w}_t - w).

Obviously, F_0^{-1} = \tilde{\Sigma} and \tilde{\nabla}_w \ln \pi(\tilde{w}_t|\theta) = F_0^{-1} \nabla_w \ln \pi(\tilde{w}_t|\theta) = \tilde{w}_t - w. The characteristic eligibility w.r.t. C is given by

\frac{\partial}{\partial c_{i,j}} \ln \pi(\tilde{w}_t|\theta) = v_i^T \left( \mathrm{triu}(Y_t C^{-T}) - \mathrm{diag}(C^{-1}) \right) v_j,

where triu(Y_t C^{-T}) denotes the upper triangular matrix whose (i, j) element is identical to the (i, j) element of Y_t C^{-T} if i \le j and zero otherwise, and Y_t = C^{-T} (\tilde{w}_t - w)(\tilde{w}_t - w)^T C^{-1} is a symmetric matrix.

[Figure 2: Performance of NPG(w) as compared to that of NPG(u), VPG(w), and VPG(u) in the linear quadratic regulation task, averaged over 100 trials. Left: mean return versus steps; the empirical optimum denotes the mean return under the optimum gain. Center and right: illustration of the main difference between control- and parameter-based exploration; the 1-sigma sampling area in the state-control space (center) and the state-parameter space (right) is plotted.]

Let c_k = (c_{k,k}, ..., c_{k,n})^T (of dimension n+1-k); then, the characteristic eligibility w.r.t. c_k is expressed as

\nabla_{c_k} \ln \pi(\tilde{w}_t|\theta) = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} \left( C^{-1} Y_t - \mathrm{diag}(C^{-1}) \right) v_k.

According to (4), \mathrm{diag}(C^{-1}) v_k = c_{k,k}^{-1} v_k, v_k^T C v_k = c_{k,k}, and \begin{pmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{pmatrix} C v_k = c_{k,k} v_k; the k-th block of F_\theta^{-1} \nabla_\theta \ln \pi(\tilde{w}_t|\theta) is therefore

F_k^{-1} \nabla_{c_k} \ln \pi(\tilde{w}_t|\theta) = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( I - \tfrac{1}{2} v_k v_k^T \right) C \begin{pmatrix} 0 & 0 \\ 0 & I_{\bar{k}} \end{pmatrix} \left( C^{-1} Y_t - \mathrm{diag}(C^{-1}) \right) v_k = \begin{bmatrix} 0 & I_{\bar{k}} \end{bmatrix} C^T \left( I - \tfrac{1}{2} v_k v_k^T \right) (Y_t - I) v_k.

Because \tilde{\nabla}_{c_k} \ln \pi(\tilde{w}_t|\theta)^T forms the (k, k:n) entries of an upper triangular matrix times C, collecting the blocks yields

\tilde{\nabla}_C \ln \pi(\tilde{w}_t|\theta) = \left( \mathrm{triu}(Y_t) - \tfrac{1}{2} \mathrm{diag}(Y_t) - \tfrac{1}{2} I \right) C.   (5)

Therefore, the time complexity of computing

\tilde{\nabla}_\theta \ln \pi(\tilde{w}_t|\theta) = [\, \tilde{\nabla}_w \ln \pi(\tilde{w}_t|\theta)^T, \tilde{\nabla}_{c_1} \ln \pi(\tilde{w}_t|\theta)^T, ..., \tilde{\nabla}_{c_n} \ln \pi(\tilde{w}_t|\theta)^T \,]^T

is O(n^3), which is of the same order as the computation of \nabla_\theta \ln \pi(\tilde{w}_t|\theta). This is a significant improvement over the current natural policy gradient estimation using (2) and (3) with parameter-based exploration, whose complexity is O(n^6). Note that simpler forms of the exploration distribution could be used.
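Putting Algorithm 1 and Eq. (5) together, one NPGPE update is only a few lines of linear algebra. The sketch below is a minimal NumPy illustration, not the authors' implementation: the environment hook env_step (which should execute u_t ~ pi(u_t|s_t, w_tilde_t) in the MDP and return the immediate reward) and the constants alpha, beta, and b are assumed placeholders. It uses the fact that w_tilde_t - w = C^T eps_t implies Y_t = eps_t eps_t^T.

import numpy as np

def npgpe_step(w, C, z_w, z_C, env_step, alpha=0.01, beta=0.9, b=0.0):
    """One NPGPE update (Algorithm 1). w: mean parameters, shape (n,);
    C: upper triangular Cholesky factor of the covariance, shape (n, n)."""
    n = w.shape[0]
    eps = np.random.randn(n)              # step 3: eps_t ~ N(0, I)
    w_tilde = C.T @ eps + w               # sampled controller parameters
    r = env_step(w_tilde)                 # step 4: act with pi(u|s, w_tilde), get r_t

    # Step 5: natural eligibilities, with Y_t = eps eps^T plugged into Eq. (5)
    g_w = w_tilde - w                     # equals C^T eps
    Y = np.outer(eps, eps)
    g_C = (np.triu(Y) - 0.5 * np.diag(np.diag(Y)) - 0.5 * np.eye(n)) @ C

    # Steps 6-7: discounted eligibility traces and parameter update
    z_w = beta * z_w + g_w
    z_C = beta * z_C + g_C
    w = w + alpha * (r - b) * z_w
    C = C + alpha * (r - b) * z_C         # remains upper triangular
    return w, C, z_w, z_C

Because the bracketed factor in g_C is upper triangular, the update keeps C upper triangular, which is one of the advantages of the Cholesky parameterization noted in Section 3.1.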
When we use an exploration strategy that is represented as an independent normal distribution for each parameter w_i in w, the natural policy gradient is estimated in O(n) time. This limited form ignores the relationships between parameters, but it is practical for high-dimensional controllers.

3.4 An Algorithm

For a parameterized class of controllers \pi(u|s, w), we can use the exploration strategy \pi(\tilde{w}|\theta). An online version of this implementation, based on the GPOMDP algorithm, is shown in Algorithm 1. In practice, the parameters of the controller \tilde{w}_t are generated by \tilde{w}_t = C^T \epsilon_t + w, where \epsilon_t \sim N(0, I) are standard normal random numbers. We can then use Y_t = C^{-T}(\tilde{w}_t - w)(\tilde{w}_t - w)^T C^{-1} = \epsilon_t \epsilon_t^T directly. To reduce the variance of the gradient estimation, we employ variance reduction techniques [6] to adapt the reinforcement baseline b.

[Figure 3: Simulator of a two-link arm robot.]

4 Experiments

In this section, we evaluate the performance of the proposed NPGPE method. The efficiency of parameter-based exploration has been reported for episodic tasks [18]. We compare parameter- and control-based exploration strategies, with both natural and conventional "vanilla" gradients, on a simple continuing task as an example of a linear control problem. We also demonstrate NPGPE's usefulness on a physically realistic locomotion task using a two-link arm robot simulator.

4.1 Implementation

We compare two different exploration strategies. The first is the parameter-based exploration strategy \pi(\tilde{w}|\theta) presented in Section 3.1. The second is the control-based exploration strategy \pi(u|\bar{u}, D), a normal distribution over the control space, where \bar{u} is the mean control generated by the controller \pi and D is the Cholesky factor of the covariance matrix \Sigma, i.e., D is an m \times m upper triangular matrix with \Sigma = D^T D. The parameters of the policy \pi_U(u|s, \theta) are \theta = \langle w, D \rangle, an [n + m(m+1)/2]-dimensional column vector consisting of the elements of w and the upper-right elements of D.

4.2 Linear Quadratic Regulator

The following linear control problem can serve as a benchmark for delayed reinforcement tasks [10]. The dynamics of the environment are

s_{t+1} = s_t + u_t + \delta,

where s \in \mathbb{R}^1, u \in \mathbb{R}^1, and \delta \sim N(0, 0.5^2). The immediate reward is given by r_t = -s_t^2 - u_t^2. In this experiment, the set of possible states is constrained to lie in the range [-4, 4], and s_t is truncated accordingly. When the agent chooses an action that does not lie in the range [-4, 4], the action executed in the environment is also truncated. The controller is represented by \pi(u|s, w) = s \cdot w, where w \in \mathbb{R}^1. The optimal parameter is given by

w^* = \frac{2}{1 + 2\gamma + \sqrt{4\gamma^2 + 1}} - 1

from the Riccati equation.

For clarity, we write NPG for methods that employ the natural policy gradient and VPG for methods that employ the "vanilla" policy gradient. NPG(w) and VPG(w) denote the use of the parameter-based exploration strategy, and NPG(u) and VPG(u) denote the use of the control-based exploration strategy. Our proposed NPGPE method is NPG(w).

Figure 2 (left) shows the performance of all compared methods. The algorithm using parameter-based exploration performed better than the one using control-based exploration on this continuing task. The natural policy gradient also improved the convergence speed, and its combination with parameter-based exploration outperformed all other methods.
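The benchmark itself is easy to reproduce. A minimal NumPy rollout is sketched below; the horizon T and the use of the finite-horizon mean reward as a stand-in for the average reward \rho(\theta) are illustrative assumptions, not specifications from the text.

import numpy as np

def lqr_mean_reward(w, T=1000, rng=None):
    """Estimate the mean reward of the linear controller u = s * w on the
    benchmark s' = s + u + delta, r = -s^2 - u^2, with s and u clipped to [-4, 4]."""
    if rng is None:
        rng = np.random.default_rng()
    s, total = 0.0, 0.0
    for _ in range(T):
        u = np.clip(s * w, -4.0, 4.0)
        total += -s**2 - u**2
        s = np.clip(s + u + rng.normal(0.0, 0.5), -4.0, 4.0)
    return total / T

# Optimal gain from the Riccati equation, as given above (gamma = 0.9)
gamma = 0.9
w_star = 2.0 / (1.0 + 2.0 * gamma + np.sqrt(4.0 * gamma**2 + 1.0)) - 1.0

Evaluating lqr_mean_reward(w_star) roughly corresponds to the "empirical optimum" line of Figure 2 (left), the mean return under the optimum gain.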
The reason for the acceleration in learning in this case may be that the samples generated by the parameter-based exploration strategy allow an effective search. Figure 2 (center and right) shows plots of the sampling area in the state-control space and the state-parameter space, respectively. Because control-based exploration maintains the sampling area in the control space, the sampling is almost uniform in the parameter space around s = 0, where the agent visits frequently. Therefore, parameter-based exploration may realize more efficient sampling than control-based exploration.

4.3 Locomotion Task on a Two-link Arm Robot

We applied the algorithm to the robot shown in Figure 3, due to Kimura et al. [11]. The objective of learning is to find control rules that move the robot forward. The joints are controlled by servo motors that react to angular-position commands. At each time step, the agent observes the angular positions of the two motors, where each observation o_1, o_2 is normalized to [0, 1], and selects an action. The immediate reward is the distance of the body movement caused by the previous action; when the robot moves backward, the agent receives a negative reward. The state vector is expressed as s = [o_1, o_2, 1]^T. The control for motor i is generated by u_i = 1 / (1 + exp(-\sum_j s_j w_{i,j})). The dimension of the policy parameters is d_W = n(n+3)/2 = 27 for the parameter-based exploration strategy and d_U = n + m(m+1)/2 = 9 for the control-based one.

[Figure 4: Performance of NPG(w) as compared to that of NPG(u) and NAC(u) in the locomotion task, averaged over 100 trials. Left: mean performance of all compared methods. Center: parameters of the controller for NPG(w). Right: parameters of the controller for NPG(u). The controller parameters are normalized by gain_i = \sqrt{\sum_j w_{i,j}^2} and weight_{i,j} = w_{i,j} / gain_i, where w_{i,j} denotes the j-th parameter of the i-th joint. Arrows in the center and right panels denote the changing points of the relation between two important parameters.]

We compared NPG(w), i.e., NPGPE, with NPG(u) and NAC(u). NAC is the state-of-the-art policy gradient algorithm [15] that combines natural policy gradients, an actor-critic framework, and least-squares temporal-difference Q-learning. NAC computes the inverse of a d x d matrix to estimate the natural steepest ascent direction. Because NAC(w) would have O(d_W^3) time complexity per iteration, which is prohibitively expensive, we apply NAC only to control-based exploration.

Figure 4 (left) shows our results. Initially, NPG(w) is outperformed by NAC(u); however, it then reaches good solutions with fewer steps. Furthermore, at a later stage, NAC(u) only matches NPG(u). Figure 4 (center and right) shows the path of the relation between the parameters of the controller. NPG(w) is much slower than NPG(u) to adapt the relation at an early stage; however, it seeks the relations of the important parameters (indicated by arrows in the figures) faster, whereas NPG(u) gets stuck because of inefficient sampling.

5 Conclusions

This paper proposed a novel natural policy gradient method combined with parameter-based exploration to cope with high-dimensional reinforcement learning domains. The proposed algorithm, NPGPE, is very simple and quickly calculates the estimate of the natural policy gradient.
Moreover, the experimental results demonstrate a significant improvement in the control domain. Future work will focus on developing actor-critic versions of NPGPE that might encourage performance improvements at an early stage, and on combining other gradient methods such as natural conjugate gradient methods [8]. In addition, a comparison with other direct parameter perturbation methods, such as finite difference gradient methods [14], CMA-ES [7], and NES [19], will be necessary to gain a better understanding of the properties and efficacy of the combination of parameter-based exploration strategies and the natural policy gradient. Furthermore, the application of the algorithm to real-world problems is required to assess its utility.

Acknowledgments

This work was supported by the Japan Society for the Promotion of Science (22 9031).

References

[1] Youhei Akimoto, Yuichi Nagata, Isao Ono, and Shigenobu Kobayashi. Bidirectional relation between CMA evolution strategies and natural evolution strategies. Parallel Problem Solving from Nature XI, pages 154-163, 2010.
[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[3] S. Amari and H. Nagaoka. Methods of Information Geometry. American Mathematical Society, 2007.
[4] J. Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI '03: Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 1019-1024, 2003.
[5] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319-350, 2001.
[6] Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471-1530, 2004.
[7] V. Heidrich-Meisner and C. Igel. Variable metric reinforcement learning methods applied to the noisy mountain car problem. In EWRL 2008, pages 136-150, 2008.
[8] Antti Honkela, Matti Tornio, Tapani Raiko, and Juha Karhunen. Natural conjugate gradient in variational inference. In ICONIP 2007, pages 305-314, 2008.
[9] S. A. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, pages 1531-1538, 2001.
[10] H. Kimura and S. Kobayashi. Reinforcement learning for continuous action using stochastic gradient ascent. In Intelligent Autonomous Systems (IAS-5), pages 288-295, 1998.
[11] Hajime Kimura, Kazuteru Miyazaki, and Shigenobu Kobayashi. Reinforcement learning in POMDPs with function approximation. In ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning, pages 152-160, 1997.
[12] Hajime Kimura, Masayuki Yamamura, and Shigenobu Kobayashi. Reinforcement learning by stochastic hill climbing on discounted reward. In ICML, pages 295-303, 1995.
[13] Jens Kober and Jan Peters. Policy search for motor primitives in robotics. In Advances in Neural Information Processing Systems 21, pages 849-856, 2009.
[14] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2219-2225, 2006.
[15] Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180-1190, 2008.
[16] Silvia Richter, Douglas Aberdeen, and Jin Yu. Natural actor-critic for road traffic optimisation. In Advances in Neural Information Processing Systems 19, pages 1169-1176. MIT Press, Cambridge, MA, 2007.
[17] Thomas Rückstieß, Martin Felder, and Jürgen Schmidhuber.
State-dependent exploration for policy gradient methods. In ECML PKDD '08: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Part II, pages 234-249, 2008.
[18] Frank Sehnke, C. Osendorfer, T. Rueckstiess, A. Graves, J. Peters, and J. Schmidhuber. Policy gradients with parameter-based exploration for control. In Proceedings of the International Conference on Artificial Neural Networks (ICANN), pages 387-396, 2008.
[19] Yi Sun, Daan Wierstra, Tom Schaul, and Juergen Schmidhuber. Efficient natural evolution strategies. In GECCO '09: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pages 539-546, 2009.
[20] R. S. Sutton. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 12, pages 1057-1063, 2000.
[21] Daan Wierstra, A. Foerster, Jan Peters, and Juergen Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In International Conference on Artificial Neural Networks, 2007.
[22] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, pages 229-256, 1992.
Efficient and Robust Feature Selection via Joint `2,1-Norms Minimization Feiping Nie Computer Science and Engineering University of Texas at Arlington [email protected] Heng Huang Computer Science and Engineering University of Texas at Arlington [email protected] Xiao Cai Computer Science and Engineering University of Texas at Arlington [email protected] Chris Ding Computer Science and Engineering University of Texas at Arlington [email protected] Abstract Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection method with emphasizing joint `2,1 -norm minimization on both loss function and regularization. The `2,1 -norm based loss function is robust to outliers in data points and the `2,1 norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced with proved convergence. Our regression based objective makes the feature selection process more efficient. Our method has been applied into both genomic and proteomic biomarkers discovery. Extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method. 1 Introduction Feature selection, the process of selecting a subset of relevant features, is a key component in building robust machine learning models for classification, clustering, and other tasks. Feature section has been playing an important role in many applications since it can speed up the learning process, improve the mode generalization capability, and alleviate the effect of the curse of dimensionality [15]. A large number of developments on feature selection have been made in the literature and there are many recent reviews and workshops devoted to this topic, e.g., NIPS Conference [7]. In past ten years, feature selection has seen much activities primarily due to the advances in bioinformatics where a large amount of genomic and proteomic data are produced for biological and biomedical studies. For example, in genomics, DNA microarray data measure the expression levels of thousands of genes in a single experiment. Gene expression data usually contain a large number of genes, but a small number of samples. A given disease or a biological function is usually associated with a few genes [19]. Out of several thousands of genes to select a few of relevant genes thus becomes a key problem in bioinformatics research [22]. In proteomics, high-throughput mass spectrometry (MS) screening measures the molecular weights of individual biomolecules (such as proteins and nucleic acids) and has potential to discover putative proteomic biomarkers. Each spectrum is composed of peak amplitude measurements at approximately 15,500 features represented by a corresponding mass-to-charge value. The identification of meaningful proteomic features from MS is crucial for disease diagnosis and protein-based biomarker profiling [22]. 1 In general, there are three models of feature selection methods in the literature: (1) filter methods [14] where the selection is independent of classifiers, (2) wrapper methods [12] where the prediction method is used as a black box to score subsets of features, and (3) embedded methods where the procedure of feature selection is embedded directly in the training process. 
In bioinformatics applications, many feature selection methods from these categories have been proposed and applied. Widely used filter-type feature selection methods include F -statistic [4], reliefF [11, 13], mRMR [19], t-test, and information gain [21] which compute the sensitivity (correlation or relevance) of a feature with respect to (w.r.t) the class label distribution of the data. These methods can be characterized by using global statistical information. Wrapper-type feature selection methods is tightly coupled with a specific classifier, such as correlation-based feature selection (CFS) [9], support vector machine recursive feature elimination (SVM-RFE) [8]. They often have good performance, but their computational cost is very expensive. Recently sparsity regularization in dimensionality reduction has been widely investigated and also applied into feature selection studies. `1 -SVM was proposed to perform feature selection using the `1 -norm regularization that tends to give sparse solution [3]. Because the number of selected features using `1 -SVM is upper bounded by the sample size, a Hybrid Huberized SVM (HHSVM) was proposed combining both `1 -norm and `2 -norm to form a more structured regularization [26]. But it was designed only for binary classification. In multi-task learning, in parallel works, Obozinsky et. al. [18] and Argyriou et. al. [1] have developed a similar model for `2,1 -norm regularization to couple feature selection across tasks. Such regularization has close connections to group lasso [28]. In this paper, we propose a novel efficient and robust feature selection method to employ joint `2,1 norm minimization on both loss function and regularization. Instead of using `2 -norm based loss function that is sensitive to outliers, a `2,1 -norm based loss function is adopted in our work to remove outliers. Motivated by previous research [1, 18], a `2,1 -norm regularization is performed to select features across all data points with joint sparsity, i.e. each feature (gene expression or mass-to-charge value in MS) either has small scores for all data points or has large scores over all data points. To solve this new robust feature selection objective, we propose an efficient algorithm to solve such joint `2,1 -norm minimization problem. We also provide the algorithm analysis and prove the convergence of our algorithm. Extensive experiments have been performed on six bioinformatics data sets and our method outperforms five other commonly used feature selection methods in statistical learning and bioinformatics. 2 Notations and Definitions We summarize the notations and the definition of norms used in this paper. Matrices are written as boldface uppercase letters. Vectors are written as boldface lowercase letters. For matrix M = (mij ), its i-th row, j-th column are denoted by mi , mj respectively. ? n ? p1 P p n The `p -norm of the vector v ? R is defined as kvkp = |vi | . The `0 -norm of the vector n v ? R is defined as kvk0 = n P i=1 i=1 0 |vi | . The Frobenius norm of the matrix M ? Rn?m is defined as v v u n uX m uX u n X 2 2 t kMkF = kmi k2 . mij = t i=1 j=1 (1) i=1 The `2,1 -norm of a matrix was first introduced in [5] as rotational invariant `1 norm and also used for multi-task learning [1, 18] and tensor factorization [10]. It is defined as kMk2,1 v n n uX X X ? i? um 2 ?m ? , t = mij = 2 i=1 j=1 2 i=1 (2) which is rotational invariant for rows: kMRk2,1 = kMk2,1 for any rotational matrix R. The `2,1 -norm can be generalized to `r,p -norm ? ? ? pr ? p1 ? 
n !1 m n X X X ? ?p p ? ? r ?mi ? ? kMkr,p = ? |mij | ? ? = . (3) r j=1 i=1 i=1 Note that `r,p -norm is a valid norm because it satisfies the three norm conditions, including the triangle inequality kAkr,p + kBkr,p ? kA + Bkr,p . This can be proved as follows. Starting from 1 1 1 P P P the triangle inequality ( i |ui |p ) p + ( i |vi |p ) p ? ( i |ui + vi |p ) p and setting ui = kai kr and vi = kbi kr , we obtain ? ! p1 ? ! p1 ? ! p1 ? ! p1 X X X X i p i p i i p i i p ka kr + kb kr ? | ka kr + kb kr | ? | ka + b kr | , (4) i i i i where the second inequality follows the triangle inequality for `r norm: kai kr +kbi kr ? kai +bi kr . Eq. (4) is just kAkr,p + kBkr,p ? kA + Bkr,p . However, the `0 -norm is not a valid norm because it does not satisfy the positive scalability: k?vk0 = |?|kvk0 for scalar ?. The term ?norm? here is for convenience. 3 Robust Feature Selection Based on `2,1 -Norms Least square regression is one of the popular methods for classification. Given training data {x1 , x2 , ? ? ? , xn } ? Rd and the associated class labels {y1 , y2 , ? ? ? , yn } ? Rc , traditional least square regression solves the following optimization problem to obtain the projection matrix W ? Rd?c and the bias b ? Rc : n X ? T ? ?W xi + b ? yi ?2 . min 2 W,b (5) i=1 For simplicity, the bias b can be absorbed into W when the constant value 1 is added as an additional dimension for each data xi (1 ? i ? n). Thus the problem becomes: min W n X ? T ? ?W xi ? yi ?2 . 2 (6) i=1 In this paper, we use the robust loss function: min W n X ? T ? ?W xi ? yi ? , 2 (7) i=1 where the residual kWT xi ? yi k is not squared and thus outliers have less importance than the squared residual kWT xi ? yi k2 . This loss function has a rotational invariant property while the pure `1 -norm loss function does not has such desirable property [5]. We now add a regularization term R(W) with parameter ?. The problem becomes: min W n X ? T ? ?W xi ? yi ? + ?R(W). 2 (8) i=1 Several regularizations are possible: R1 (W) = kWk2 , R2 (W) = c X kwj k1 , R3 (W) = j=1 d d X X ? i ?0 ? i? ?w ? , R4 (W) = ?w ? . 2 2 i=1 (9) i=1 R1 (W) is the ridge regularization. R2 (W) is the LASSO regularization. R3 (W) and R4 (W) penalizes all c regression coefficients corresponding to a single feature as a whole. This has the 3 effects of feature selection. Although the `0 -norm of R3 (W) is the most desirable [16], in this paper, we use R4 (W) instead. The reasons are: (A) the `1 -norm of R4 (W) is convex and can be easily optimized (the main contribution of this paper); (B) it was shown that results of `0 -norm is identical or approximately identical to the `1 -norm results under practical conditions. Denote data matrix X = [x1 , x2 , ? ? ? , xn ] ? Rd?n and label matrix Y = [y1 , y2 , ? ? ? , yn ]T ? Rn?c . In this paper, we optimize min J(W) = W n X ? T ? ? ? ?W xi ? yi ? + ?R4 (W) = ?XT W ? Y? + ? kWk . 2,1 2 2,1 (10) i=1 It seems that solving this joint `2,1 -norm problem is difficult as both of the terms are non-smooth. Surprisingly, we will show in the next section that the problem can be solved using a simple yet efficient algorithm. 4 4.1 An Efficient Algorithm Reformulation as A Constrained Problem First, the problem in Eq. (10) is equivalent to ? 1? min ?XT W ? Y?2,1 + kWk2,1 , W ? which is further equivalent to min kEk2,1 + kWk2,1 W,E Rewriting the above problem as ?? ?? ? W ? ? ? min E ? W,E ? XT W + ?E = Y. s.t. ? s.t. (11) X T ?I ? ? 2,1 W E where?I ? R?n?n is an identity matrix. Denote m = n + d. Let A = W U= ? 
Rm?c , then the problem in Eq. (13) can be written as: E min kUk2,1 U s.t. AU = Y (12) ? = Y, ? XT (13) ?I ? ? Rn?m and (14) This optimization problem Eq. (14) has been widely used in the Multiple Measurement Vector (MMV) model in signal processing community. It was generally felt that the `2,1 -norm minimization problem is much more difficult to solve than the `1 -norm minimization problem. Existing algorithms usually reformulate it as a second-order cone programming (SOCP) or semidefinite programming (SDP) problem, which can be solved by interior point method or the bundle method. However, solving SOCP or SDP is computationally very expensive, which limits their use in practice. Recently, an efficient algorithm was proposed to solve the specific problem Eq. (14) by complicatedly reformulating the problem as a min-max problem and then applying the proximal method to solve it [25]. The reported results show that the algorithm is more efficient than existing algorithms. However, the algorithm is a gradient descent type method and converges very slow. Moreover, the algorithm is derived to solve the specific problem, and can not be applied directly to solve other general `2,1 -norm minimization problem. In the next subsection, we will propose a very simple but at the same time much more efficient method to solve this problem. Theoretical analysis guarantees that the proposed method will converge to the global optimum. More importantly, this method is very easy to implement and can be readily used to solve other general `2,1 -norm minimization problem. 4.2 An Efficient Algorithm to Solve the Constrained Problem The Lagrangian function of the problem in Eq. (14) is L(U) = kUk2,1 ? T r(?T (AU ? Y)). 4 (15) Taking the derivative of L(U) w.r.t U, and setting the derivative to zero, we have: ?L(U) = 2DU ? AT ? = 0, ?U where D is a diagonal matrix with the i-th diagonal element as1 1 dii = . 2 kui k2 (16) (17) Left multiplying the two sides of Eq. (16) by AD?1 , and using the constraint AU = Y, we have: 2AU ? AD?1 AT ? = 0 ? 2Y ? AD?1 AT ? = 0 ? ? = 2(AD?1 AT )?1 Y Substitute Eq. (18) into Eq. (16), we arrive at: (18) U = D?1 AT (AD?1 AT )?1 Y. (19) Since the problem in Eq. (14) is a convex problem, U is a global optimum solution to the problem if and only if the Eq. (19) is satisfied. Note that D is dependent to U and thus is also a unknown variable. We propose an iterative algorithm in this paper to obtain the solution U such that Eq. (19) is satisfied, and prove in the next subsection that the proposed iterative algorithm will converge to the global optimum. The algorithm is described in Algorithm 1. In each iteration, U is calculated with the current D, and then D is updated based on the current calculated U. The iteration procedure is repeated until the algorithm converges. Data: A ? Rn?m , Y ? Rn?c Result: U ? Rm?c Set t = 0. Initialize Dt ? Rm?m as an identity matrix repeat ?1 T ?1 T Calculate Ut+1 = D?1 Y. t A (ADt A ) Calculate the diagonal matrix Dt+1 , where the i-th diagonal element is 1 2kuit+1 k . 2 t = t + 1. until Converges Algorithm 1: An efficient iterative algorithm to solve the optimization problem in Eq. (14). 4.3 Algorithm Analysis The Algorithm 1 monotonically decreases the objective of the problem in Eq. (14) in each iteration. To prove it, we need the following lemma: Lemma 1. For any nonzero vectors u, ut ? Rc , the following inequality holds: 2 kuk2 ? 2 kuk2 kut k2 ? kut k2 ? . 2 kut k2 2 kut k2 (20) ? ? Proof. Beginning with an obvious inequality ( v ? vt )2 ? 
0, we have ? ? ? ? v ( v ? vt )2 ? 0 ? v ? 2 vvt + vt ? 0 ? v ? ? ? 2 vt ? ? ? vt v vt ? v ? ? ? vt ? ? (21) 2 2 vt 2 vt 2 2 Substitute the v and vt in Eq. (21) by kuk2 and kut k2 respectively, we arrive at the Eq. (20). 1 When ui = 0, then dii = 0 is a subgradient of kUk2,1 w.r.t. ui . However, we can not set dii = 0 when u = 0, otherwise the derived algorithm can not be guaranteed to converge. Two methods can be used to solve this problem.?First, D?1 , so we can let the i-th element ? we will see from Eq.(19) that we only need to calculate ?1 i? 1 ? of D as 2 u 2 . Second, we can regularize dii as dii = ? i T i , and the derived algorithm can be 2 (u ) u +? n p P proved to minimize the regularized `2,1 -norms of U (defined as (ui )T ui + ?) instead of the `2,1 -norms i i=1 of U. It is easy to see that the regularized `2,1 -norms of U approximates the `2,1 -norms of U when ? ? 0. 5 The convergence of the Algorithm 1 is summarized in the following theorem: Theorem 1. The Algorithm 1 will monotonically decrease the objective of the problem in Eq. (14) in each iteration, and converge to the global optimum of the problem. Proof. It can easily verified that Eq. (19) is the solution to the following problem: min T r(UT DU) s.t. AU = Y U Thus in the t iteration, Ut+1 = arg min T rUT Dt U, (23) T r(UTt+1 Dt Ut+1 ) ? T r(UTt Dt Ut ). (24) ? ?2 ? ? m ? i m ? i ?2 X X ut+1 ?2 u ? ? ? ? t ?2 , i ? ? ? 2 ut 2 uit ? (25) U AU=Y which indicates that That is to say, (22) i=1 2 i=1 2 where vectors uit and uit+1 denote the i-th row of matrices Ut and Ut+1 , respectively. On the other hand, according to Lemma 1, for each i we have ? i ?2 ? i ?2 ?ut+1 ? ?u ? ? i ? ? i? 2 ?ut+1 ? ? ? ? ? ?ut ? ? ? t ?2 . i 2 2 ? ? 2 ut 2 2 ?uit ?2 Thus the following inequality holds: ? ? ? i ?2 ! ? i ?2 ! m m ?ut+1 ? ?u ? X X ? i ? ? i? 2 ?ut+1 ? ? ? ? ?ut ? ? ? t ?2 . ? i 2 2 2 ?ut ? 2 ?uit ? 2 i=1 i=1 (26) (27) 2 Combining Eq. (25) and Eq. (27), we arrive at m m X X ? i ? ? i? ?ut+1 ? ? ?ut ? . 2 2 i=1 (28) i=1 That is to say, kUt+1 k2,1 ? kUt k2,1 . (29) Thus the Algorithm 1 will monotonically decrease the objective of the problem in Eq. (14) in each iteration t. In the convergence, Ut and Dt will satisfy the Eq. (19). As the problem in Eq. (14) is a convex problem, satisfying the Eq. (19) indicates that U is a global optimum solution to the problem in Eq. (14). Therefore, the Algorithm 1 will converge to the global optimum of the problem (14). Note that in each iteration, the Eq. (19) can be solved efficiently. First, D? is ?a diagonal matrix and ? i? thus D?1 is also diagonal with the i-th diagonal element as d?1 ii = 2 u 2 . Second, the term ?1 T ?1 Z = (AD A ) Y in Eq. (19) can be efficiently obtained by solving the linear equation: (AD?1 AT )Z = Y. (30) Empirical results show that the convergence is fast and only a few iterations are needed to converge. Therefore, the proposed method can be applied to large scale problem in practice. It is worth to point out that the proposed method can be easily extended to solve other `2,1 -norm minimization problem. For example, considering a general `2,1 -norm minimization problem as follows: X min f (U) + kAk U + Bk k2,1 s.t. U?C (31) U k The problem can be solved by solve the following problem iteratively: X min f (U) + T r((Ak U + Bk )T Dk (Ak U + Bk )) s.t. U U?C (32) k 1 . Similar theoretical where Dk is a diagonal matrix with the i-th diagonal element as 2k(Ak U+B i k ) k2 analysis can be used to prove that the iterative method will converge to a local minimum. 
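Before turning to the experiments, Algorithm 1 is compact enough to sketch in full. The NumPy version below is a minimal illustration under assumed settings (the iteration count, the regularization eps, and the synthetic usage in the comments are not from the paper); it uses the regularized weights d_ii = 1 / (2 sqrt((u^i)^T u^i + eps)) from the footnote so that vanishing rows cause no division problems.

import numpy as np

def l21_norm(M):
    """||M||_{2,1}: sum of the l2 norms of the rows of M (Eq. (2))."""
    return np.sum(np.sqrt(np.sum(M**2, axis=1)))

def rfs_solve(A, Y, n_iter=50, eps=1e-8):
    """Iteratively solve  min ||U||_{2,1}  s.t.  A U = Y  (Algorithm 1).

    A: (n, m) with m = n + d;  Y: (n, c).  Only D^{-1} is ever needed."""
    m = A.shape[1]
    D_inv = np.eye(m)                        # D_0 = I, hence D_0^{-1} = I
    for _ in range(n_iter):
        # U_{t+1} = D_t^{-1} A^T (A D_t^{-1} A^T)^{-1} Y   (Eq. (19)),
        # with the inner system solved as in Eq. (30)
        ADAt = A @ D_inv @ A.T
        U = D_inv @ A.T @ np.linalg.solve(ADAt, Y)
        # D_{t+1}^{-1} is diagonal with entries 2 sqrt(u_i^T u_i + eps)
        row_norms = np.sqrt(np.sum(U**2, axis=1) + eps)
        D_inv = np.diag(2.0 * row_norms)
    return U

# Hypothetical usage for feature selection: A = [X^T, lambda*I], U = [W; E].
# n, d, c, lam = 20, 50, 3, 1.0
# X, Y = np.random.randn(d, n), np.random.randn(n, c)
# A = np.hstack([X.T, lam * np.eye(n)])
# U = rfs_solve(A, Y)
# W = U[:d]            # l2 norms of the rows of W rank the d features

On convergence, the rows of W with the largest l2 norms indicate the jointly selected features, which is how a feature ranking would be read off under this formulation.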
Returning to the general formulation: if the problem in Eq. (31) is a convex problem, i.e., f(U) is a convex function and C is a convex set, then the iterative method will converge to the global minimum.

[Figure 1: Classification accuracy comparisons of six feature selection algorithms on six data sets: (a) ALLAML, (b) GLIOMA, (c) LUNG, (d) Carcinomas, (e) PROSTATE-GE, (f) PROSTATE-MS. Each panel plots classification accuracy against the number of features selected for ReliefF, F-score ranking, t-test, information gain, mRMR, and RFS. SVM with 5-fold cross-validation is used for classification. RFS is our method.]

5 Experimental Results

To validate the performance of our feature selection method, we applied it to two bioinformatics applications: gene expression and mass spectrometry classification. In our experiments, we used five publicly available microarray data sets and one mass spectrometry (MS) data set: the ALLAML data set [6], the malignant glioma (GLIOMA) data set [17], the human lung carcinomas (LUNG) data set [2], the Human Carcinomas (Carcinomas) data set [24, 27], and the Prostate Cancer gene expression (Prostate-GE) data set [23] for microarray data, and the Prostate Cancer (Prostate-MS) data set [20] for MS data. The Support Vector Machine (SVM) classifier is employed on these data sets, using 5-fold cross-validation.

5.1 Data Set Descriptions

We give a brief description of all data sets used in our experiments as follows.

The ALLAML data set contains in total 72 samples in two classes, ALL and AML, which contain 47 and 25 samples, respectively. Every sample contains 7,129 gene expression values.

The GLIOMA data set contains in total 50 samples in four classes: cancer glioblastomas (CG), non-cancer glioblastomas (NG), cancer oligodendrogliomas (CO), and non-cancer oligodendrogliomas (NO), with 14, 14, 7, and 15 samples, respectively. Each sample has 12,625 genes. Genes with minimal variation across the samples were removed: intensity thresholds were set at 20 and 16,000 units, and genes whose expression levels varied by less than 100 units between samples, or by less than 3-fold between any two samples, were excluded. After preprocessing, we obtained a data set with 50 samples and 4,433 genes.

The LUNG data set contains in total 203 samples in five classes, with 139, 21, 20, 6, and 17 samples, respectively. Each sample has 12,600 genes. Genes with standard deviations smaller than 50 expression units were removed, leaving a data set with 203 samples and 3,312 genes.

The Carcinomas data set is composed of 174 samples in eleven classes (prostate, bladder/ureter, breast, colorectal, gastroesophagus, kidney, liver, ovary, pancreas, lung adenocarcinomas, and lung squamous cell carcinoma), with 26, 8, 26, 23, 12, 11, 7, 27, 6, 14, and 14 samples, respectively. In the original data [24], each sample contains 12,533 genes; in the preprocessed data set [27], there are 174 samples and 9,182 genes.
Carcinomas data set composed of total 174 samples in eleven classes, prostate, bladder/ureter, breast, colorectal, gastroesophagus, kidney, liver, ovary, pancreas, lung adenocarcinomas, and lung squamous cell carcinoma, which have 26, 8, 26, 23, 12, 11, 7, 27, 6, 14, 14 samples, respectively. In the original data [24], each sample contains 12533 genes. In the preprocessed data set [27], there are 174 samples and 9182 genes. 7 Table 1: Classification Accuracy of SVM using 5-fold cross validation. Six feature selection methods are compared. RF: ReliefF, F-s: F-score, IG: Information Gain, and RFS: our method. Average accuracy of top 20 features (%) RF F-s T-test IG mRMR ALLAML 90.36 89.11 92.86 93.21 93.21 GLIOMA 50 50 56 60 62 LUNG 91.68 87.7 89.22 93.1 92.61 Carcinom. 79.88 65.48 49.9 85.09 78.22 Pro-GE 92.18 95.09 92.18 92.18 93.18 Pro-MS 76.41 98.89 95.56 98.89 95.42 Average 80.09 81.04 79.29 87.09 85.78 RFS 95.89 74 93.63 91.38 95.09 98.89 91.48 Average accuracy of top 80 features (%) RF F-s T-test IG mRMR 95.89 96.07 94.29 95.71 94.46 54 60 58 66 66 93.63 91.63 90.66 95.1 94.12 90.24 83.33 68.91 89.65 87.92 91.18 93.18 93.18 89.27 86.36 89.93 98.89 94.44 98.89 93.14 85.81 87.18 83.25 89.10 87 RFS 97.32 70 96.07 93.66 95.09 100 92.02 Prostate-GE data set has in total 102 samples in two classes tumor and normal, which have 52 and 50 samples, respectively. The original data set contains 12600 genes. In our experiment, intensity thresholds were set at 100 C16000 units. Then we filtered out the genes with max/min ? 5 or (max-min) ? 50. After preprocessing, we obtained a data set with 102 samples and 5966 genes. Prostate-MS data can be obtained from the FDA-NCI Clinical Proteomics Program Databank [20]. This MS data set consists of 190 samples diagnosed as benign prostate hyperplasia, 63 samples considered as no evidence of disease, and 69 samples diagnosed as prostate cancer. The samples diagnosed as benign prostate hyperplasia as well as samples having no evidence of prostate cancer were pooled into one set making 253 control samples, whereas the other 69 samples are the cancer samples. 5.2 Classification Accuracy Comparisons All data sets are standardized to be zero-mean and normalized by standard deviation. SVM classifier has been individually performed on all data sets using 5-fold cross-validation. We utilize the linear kernel with the parameter C = 1. We compare our feature selection method (called as RFS) to several popularly used feature selection methods in bioinformatics, such as F -statistic [4], reliefF [11, 13], mRMR [19], t-test, and information gain [21]. Because the above data sets are for multiclass classification problem, we don?t compare to `1 -SVM, HHSVM and other methods that were designed for binary classification. Fig. 1 shows the classification accuracy comparisons of all five feature selection methods on six data sets. Table 1 shows the detailed experimental results using SVM. We compute the average accuracy using the top 20 and top 80 features for all feature selection approaches. Obviously our approaches outperform other methods significantly. With top 20 features, our method is around 5%-12% better than other methods all six data sets. 6 Conclusions In this paper, we proposed a new efficient and robust feature selection method with emphasizing joint `2,1 -norm minimization on both loss function and regularization. The `2,1 -norm based regression loss function is robust to outliers in data points and also efficient in calculation. 
Motivated by previous work, the ℓ2,1-norm regularization is used to select features across all data points with joint sparsity. We provided an efficient algorithm with proved convergence. Our method has been applied to both genomic and proteomic biomarker discovery, and extensive empirical studies on two bioinformatics tasks and six data sets demonstrate its performance.

7 Acknowledgements

This research was funded by US NSF-CCF-0830780, 0939187, 0917274, NSF DMS-0915228, NSF CNS-0923494, 1035913.

8 References

[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. NIPS, pages 41-48, 2007.
[2] A. Bhattacharjee, W. G. Richards, et al. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences, 98(24):13790-13795, 2001.
[3] P. Bradley and O. Mangasarian. Feature selection via concave minimization and support vector machines. ICML, 1998.
[4] C. Ding and H. Peng. Minimum redundancy feature selection from microarray gene expression data. Proceedings of the Computational Systems Bioinformatics, 2003.
[5] C. Ding, D. Zhou, X. He, and H. Zha. R1-PCA: Rotational invariant L1-norm principal component analysis for robust subspace factorization. Proc. Int'l Conf. Machine Learning (ICML), June 2006.
[6] S. P. Fodor. DNA sequencing: Massively parallel genomics. Science, 277(5324):393-395, 1997.
[7] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. J. Machine Learning Research, 2003.
[8] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 46(1):389, 2002.
[9] M. A. Hall and L. A. Smith. Feature selection for machine learning: Comparing a correlation-based filter approach to the wrapper. 1999.
[10] H. Huang and C. Ding. Robust tensor factorization using R1 norm. CVPR 2008, pages 1-8, 2008.
[11] K. Kira and L. A. Rendell. A practical approach to feature selection. Pages 249-256, 1992.
[12] R. Kohavi and G. H. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[13] I. Kononenko. Estimating attributes: Analysis and extensions of RELIEF. In European Conference on Machine Learning, pages 171-182, 1994.
[14] P. Langley. Selection of relevant features in machine learning. In AAAI Fall Symposium on Relevance, pages 140-144, 1994.
[15] H. Liu and H. Motoda. Feature Selection for Knowledge Discovery and Data Mining. Springer, 1998.
[16] D. Luo, C. Ding, and H. Huang. Towards structural sparsity: An explicit ℓ2/ℓ0 approach. ICDM, 2010.
[17] C. L. Nutt, D. R. Mani, R. A. Betensky, P. Tamayo, J. G. Cairncross, C. Ladd, U. Pohl, C. Hartmann, and M. E. McLaughlin. Gene expression-based classification of malignant gliomas correlates better with survival than histological classification. Cancer Res., 63:1602-1607, 2003.
[18] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, 2006.
[19] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Analysis and Machine Intelligence, 27, 2005.
[20] E. F. Petricoin, D. K. Ornstein, et al. Serum proteomic patterns for detection of prostate cancer. J. Natl. Cancer Inst., 94(20):1576-1578, 2002.
[21] L. E. Raileanu and K. Stoffel.
Theoretical comparison between the Gini index and information gain criteria. University of Neuchatel, 2000.
[22] Y. Saeys, I. Inza, and P. Larranaga. A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19):2507-2517, 2007.
[23] D. Singh, P. Febbo, K. Ross, et al. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, pages 203-209, 2002.
[24] A. I. Su, J. B. Welsh, L. M. Sapinoso, et al. Molecular classification of human carcinomas by use of gene expression signatures. Cancer Research, 61:7388-7393, 2001.
[25] L. Sun, J. Liu, J. Chen, and J. Ye. Efficient recovery of jointly sparse vectors. In Neural Information Processing Systems, 2009.
[26] L. Wang, J. Zhu, and H. Zou. Hybrid huberized support vector machines for microarray classification. ICML, 2007.
[27] K. Yang, Z. Cai, J. Li, and G. Lin. A stable gene selection in microarray data analysis. BMC Bioinformatics, 7:228, 2006.
[28] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68:49-67, 2005.
Mixture of time-warped trajectory models for movement decoding
Elaine A. Corbett, Eric J. Perreault and Konrad P. Körding
Northwestern University, Chicago, IL 60611
[email protected]

Abstract
Applications of brain-machine interfaces typically estimate user intent based on biological signals that are under voluntary control. For example, we might want to estimate how a patient with a paralyzed arm wants to move based on residual muscle activity. To solve such problems it is necessary to integrate the obtained information over time. To do so, state-of-the-art approaches typically use a probabilistic model of how the state, e.g. the position and velocity of the arm, evolves over time: a so-called trajectory model. We wanted to further develop this approach using two intuitive insights: (1) at any given point in time there may be a small set of likely movement targets, potentially identified by the location of objects in the workspace or by gaze information from the user; (2) the user may want to produce movements at varying speeds. We thus use a generative model with a trajectory model incorporating these insights. Approximate inference on that generative model is implemented using a mixture of extended Kalman filters. We find that the resulting algorithm allows us to decode arm movements dramatically better than when we use a trajectory model with linear dynamics.

1 Introduction
When patients have lost a limb or the ability to communicate with the outside world, brain-machine interfaces (BMIs) are often used to enable robotic prostheses or restore communication. To achieve this, the user's intended state of the device must be decoded from biological signals. In the context of Bayesian statistics, two aspects are important for the design of an estimator of a temporally evolving state: the observation model, which describes how measured variables relate to the system's state, and the trajectory model, which describes how the state changes over time in a probabilistic manner. Following this logic, many recent BMI applications have relied on Bayesian estimation for a wide range of problems, including the decoding of intended human [1] and animal [2] movements. In the context of BMIs, Bayesian approaches offer a principled way of formalizing the uncertainty about signals and thus often result in improvements over other signal processing techniques [1]-[3]. Most work on state estimation in dynamical systems has assumed linear dynamics and Gaussian noise. Under these circumstances, efficient algorithms result from belief propagation. The most frequent application uses the Kalman filter (KF), which recursively combines noisy state observations with the probabilistic evolution of state defined by the trajectory model to estimate the marginal distribution over states [4]. Such approaches have been used widely for applications including upper [1] and lower [5] extremity prosthetic devices, functional electrical stimulation [6] and human-computer interactions [7]. As these algorithms are so commonly used, it seems promising to develop extensions to nonlinear trajectory models that may better describe the probabilistic distribution of movements in everyday life. One salient departure from the standard assumptions is that people tend to produce both slow and fast movements, depending on the situation. Models with linear dynamics only allow such deviation through the noise term, which makes these models poor at describing the natural variation of movement speeds during real-world tasks.
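To make this baseline concrete, the KF recursion just described can be sketched as follows. This is a generic textbook predict/update cycle in Python/NumPy, not the authors' code; it also returns the innovation log-likelihood, which becomes useful later when weighting mixture components.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a standard Kalman filter.
    x, P : previous state estimate and covariance
    y    : current observation (e.g., a vector of EMG features)
    A, C : state-transition and observation matrices
    Q, R : process and observation noise covariances"""
    # Predict through the trajectory model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct with the current observation.
    innov = y - C @ x_pred                   # innovation
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    # Gaussian log-likelihood of the innovation under this model.
    _, logdet = np.linalg.slogdet(2.0 * np.pi * S)
    loglik = -0.5 * (logdet + innov @ np.linalg.solve(S, innov))
    return x_new, P_new, loglik
```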
Explicitly incorporating movement speed into the trajectory model should lead to better movement estimates. Knowledge of the target position should also strongly affect trajectory models. After all, we tend to accelerate our arm early during a movement and slow down later on. Target information can be linearly incorporated into the trajectory model, and this has greatly improved predictions [8]-[12]. Alternatively, if there is a small number of potential targets then a mixture of trajectory models [13] can be used. Here we are interested in the case where available data provide a prior over potential targets but where movement targets may be anywhere. We want to incorporate target uncertainty and allow generalization to novel targets. Prior information about potential targets could come from a number of sources but would generally be noisy. For example, activity in the dorsal premotor cortex provides information about the intended target location prior to movement and may be used where such recordings are available [14]. Target information may also be found noninvasively by tracking eye movements. However, such data will generally provide non-zero priors for a number of possible target locations as the subject saccades over the scene. While subjects almost always look at a target before reaching for it [15], there may be a delay of up to a second between looking at the target and the reach, a time interval over which up to three saccades are typically made. Each of these fixations could be the target. Hence, a probabilistic distribution over targets is appropriate when using either neural recordings or eye tracking to estimate potential reach targets. Here we present an algorithm that uses a mixture of extended Kalman filters (EKFs) to combine our insights related to the variation of movement speed and the availability of probabilistic target knowledge. Each of the mixture components allows the speed of the movement to vary continuously over time. We tested how well we could use EMGs and eye movements to decode the hand position of humans performing a three-dimensional, large-workspace reaching task. We find that using a trajectory model that allows for probabilistic target information and variation of speed leads to dramatic improvements in decoding quality.

2 General Decoding Setting
We wanted to test how well different decoding algorithms can decode human movement over a wide range of dynamics. While many recent studies have looked at more restrictive, two-dimensional movements, a system to restore arm function should produce a wide range of 3D trajectories. We recorded arm kinematics and EMGs of healthy subjects during unconstrained 3D reaches to targets over a large workspace. Two healthy subjects were asked to reach at slow, normal and fast speeds, as they would in everyday life. Subjects were seated as they reached towards 16 LEDs in blocks of 150 s, which were located on two planes positioned such that all targets were just reachable (Fig. 1A). The target LED was lit for one second prior to an auditory go cue, at which time the subject would reach to the target at the appropriate speed. Slow, normal and fast reaches were allotted 3 s, 1.5 s and 1 s respectively; however, the subjects determined the speed. An approximate total of 450 reaches were performed per subject. The subjects provided informed consent, and the protocol was approved by the Northwestern University Institutional Review Board.
EMG signals were measured from the pectoralis major and the three deltoid muscles of the shoulder. This represents a small subset of the muscles involved in reaching, and approximates those muscles retaining some voluntary control following mid-level cervical spinal cord injuries. The EMG signals were band-pass filtered between 10 and 1,000 Hz, and subsequently anti-alias filtered. Hand, wrist, shoulder and head positions were tracked using an Optotrak motion analysis system. We simultaneously recorded eye movements with an ASL EYETRAC-6 head-mounted eye tracker. Approximately 25% of the reaches were assigned to the test set, and the rest were used for training. Reaches for which either the motion capture data was incomplete, or there was visible motion artifact on the EMG, were removed. As the state we used hand positions and joint angles (3 shoulder, 2 elbow; position, velocity and acceleration; 24 dimensions). Joint angles were calculated from the shoulder and wrist marker data using digitized bony landmarks which defined a coordinate system for the upper limb, as detailed by Wu et al. [16]. As the motion data were sampled at 60 Hz, the mean absolute value of the EMG in the corresponding 16.7 ms windows was used as an observation of the state at each time-step. Algorithm accuracy was quantified by normalizing the root-mean-squared error by the straight-line distance between the first and final position of the endpoint for each reach. We compared the algorithms statistically using repeated-measures ANOVAs with Tukey post-hoc tests, treating reach and subject as random effects. In the rest of the paper we will ask how well these reaching movements can be decoded from EMG and eye-tracking data.

Figure 1: A Experimental setup and B sample kinematics and processed EMGs for one reach.

3 Kalman Filters with Target Information
All models that we consider in this paper assume linear observations with Gaussian noise:

    y_t = C x_t + v_t,    (1)

where x is the state, y is the observation and v is the measurement noise with p(v) ~ N(0,R), and R is the observation covariance matrix. The model fitted the measured EMGs with an average r2 of 0.55. This highlights the need to integrate information over time. The standard approach also assumes linear dynamics and Gaussian process noise:

    x_{t+1} = A x_t + w_t,    (2)

where x_t represents the hand and joint angle positions, w is the process noise with p(w) ~ N(0,Q), and Q is the state covariance matrix. The Kalman filter does optimal inference for this generative model. This model can effectively capture the dynamics of stereotypical reaches to a single target by appropriately tuning its parameters. However, when used to describe reaches to multiple targets, the model cannot describe target-dependent aspects of reaching but boils down to a random drift model. Fast velocities are underestimated, as they are unlikely under the trajectory model, and there is excessive drift close to the target (Fig. 2A). In many decoding applications we may know the subject's target. A range of recent studies have addressed the issue of incorporating this information into the trajectory model [8, 13], and we might assume the effect of the target on the dynamics to be linear. This naturally suggests adding the target to the state space, which works well in practice [9, 12]. By appending the target to the state vector (KFT), the simple linear format of the KF may be retained:

    [x_{t+1}; xT_{t+1}] = A' [x_t; xT_t] + w_t,    (3)

where xT_t is the vector of target positions, with dimensionality less than or equal to that of x_t.
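A minimal sketch of how such a target-appended transition matrix can be assembled, assuming a block structure in which the target block is the identity (targets do not move) and a coupling matrix B pulls the kinematic state toward the target. B and the numbers in the example are hypothetical placeholders; in the paper all parameters are learned from training reaches.

```python
import numpy as np

def augment_with_target(A, B, n_target):
    """Transition matrix for the appended state [x; x_T] of the KFT.
    A: (n_state, n_state) original transition; B: (n_state, n_target)
    coupling of the static target into the state update."""
    n_state = A.shape[0]
    top = np.hstack([A, B])
    bottom = np.hstack([np.zeros((n_target, n_state)), np.eye(n_target)])
    return np.vstack([top, bottom])

# Example: a 1-D position/velocity state weakly attracted to a target.
dt = 1.0 / 60.0                       # 60 Hz motion-capture rate
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [0.05]])         # hypothetical target pull on velocity
A_kft = augment_with_target(A, B, n_target=1)
```

The point of the construction is that the KFT remains a plain linear-Gaussian model, so the standard KF recursion applies unchanged to the appended state.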
This trajectory model thus allows describing both the rapid acceleration that characterizes the beginning of a reach and the stabilization towards its end. We compared the accuracy of the KF and the KFT to the Single Target Model (STM), a KF trained only on reaches to the target being tested (Fig. 2). The STM represents the best possible prediction that could be obtained with a Kalman filter. Assuming the target is perfectly known, we implemented the KFT by correctly initializing the target state xT at the beginning of the reach. We will relax this assumption below. The initial hand and joint angle positions were also assumed to be known.

Figure 2: A Sample reach and predictions and B average accuracies with standard errors for KFT, KF and MTM.

Consistent with the recent literature, both methods that incorporated target information produced higher prediction accuracy than the standard KF (both p<0.0001). Interestingly, there was no significant difference between the KFT and the STM (p=0.9). It seems that when we have knowledge of the target, we do not lose much by training a single model over the whole workspace rather than modeling the targets individually. This is encouraging, as we desire a BMI system that can generalize to any target within the workspace, not just specifically to those that are available in the training data. Clearly, adding the target to the state space allows the dynamics of typical movements to be modeled effectively, resulting in dramatic increases in decoding performance.

4 Time Warping
4.1 Implementing a time-warped trajectory model
While the KFT above can capture the general reach trajectory profile, it does not allow for natural variability in the speed of movements. Depending on our task objectives, which would not directly be observed by a BMI, we might lazily reach toward a target or move at maximal speed. We aim to change the trajectory model to explicitly incorporate a warping factor by which the average movement speed is scaled, allowing for such variability. As the movement speed will be positive in all practical cases, we model the logarithm of this factor and append it to the state vector:

    x~_t = [x_t; x_s], with x_s = log S.    (4)

We create a time-warped trajectory model by noting that if the average rate of a trajectory is to be scaled by a factor S, the position at time t will equal that of the original trajectory at time St. Differentiating, the velocity will be multiplied by S, and the acceleration by S^2. For simplicity, the trajectory noise is assumed to be additive and Gaussian, and the model is assumed to be stationary: (5) where I_p is the p-dimensional identity matrix and 0_p is a p × p matrix of zeros. Only the terms used to predict the acceleration states need to be estimated to build the state transition matrix, and they are scaled as a nonlinear function of x_s. After adding the variable movement speed to the state space, the system is no longer linear. Therefore we need a different solution strategy. Instead of the typical KFT we use the extended Kalman filter (EKFT) to implement a nonlinear trajectory model by linearizing the dynamics around the best estimate at each time-step [17]. With this approach we add only small computational overhead to the KFT recursions.

4.2 Training the time-warping model
The filter parameters were trained using a variant of the Expectation-Maximization (EM) algorithm [18]. For extended Kalman filter learning the initialization of the variables may matter.
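Before turning to those training details, the structure of the warped dynamics can be made concrete. For a kinematic state with p positions, p velocities and p accelerations, scaling the average rate by S = exp(x_s) multiplies velocity terms by S and acceleration terms by S^2; the sketch below builds such a transition as a function of the log-warp state. The constant-acceleration kinematics and the random-walk block for x_s are our own simplifications for illustration, not the learned transition of Eq. (5).

```python
import numpy as np

def warped_transition(x_s, dt, p):
    """Transition matrix for a [position; velocity; acceleration; x_s]
    state whose average rate is scaled by S = exp(x_s)."""
    S = np.exp(x_s)
    I, Z = np.eye(p), np.zeros((p, p))
    F = np.block([
        [I, S * dt * I, 0.5 * (S * dt) ** 2 * I],  # position update
        [Z, I,          S * dt * I],               # velocity update
        [Z, Z,          I],                        # acceleration (placeholder)
    ])
    # The log-warp state itself follows a random walk (identity row).
    return np.block([
        [F,                    np.zeros((3 * p, 1))],
        [np.zeros((1, 3 * p)), np.ones((1, 1))],
    ])

F = warped_transition(x_s=0.3, dt=1.0 / 60.0, p=3)  # S ~ 1.35, 3-D hand
```

Because this transition depends on x_s, the model is nonlinear; the EKFT simply re-linearizes it around the current estimate of x_s at each time-step.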
S was initialized with the ground-truth average reach speed for each movement relative to the average speed across all movements. The state transition parameters were estimated using nonlinear least-squares regression, while C, Q and R were estimated linearly for the new system, using the maximum-likelihood solution [18] (M-step). For the E-step we used a standard extended Kalman smoother. We thus found the expected values for the states given the current filter parameters. For this computation, and later when testing the algorithm, x_s was initialized to its average value across all reaches, while the remaining states were initialized to their true values. The smoothed estimate for x_s was then used, along with the true values for the other states, to re-estimate the filter parameters in the M-step as before. We alternated between the E and M steps until the log-likelihood converged (which it did in all cases). Following the training procedure, the diagonal entry of the state covariance matrix Q corresponding to x_s was set to the variance of the smoothed x_s over all reaches, according to how much this state should be allowed to change during prediction. This allowed the estimate of x_s to develop over the course of the reach, due to the evidence provided by the observations, better capturing the dynamics of reaches at different speeds.

4.3 Performance of the time-warped EKFT
Incorporating time warping explicitly into the trajectory model produced a noticeable increase in decoding performance over the KFT. As the speed state x_s is estimated throughout the course of the reach, based on the evidence provided by the observations, the trajectory model has the flexibility to follow the dynamics of the reach more accurately (Fig. 3). While at the normal self-selected speed the difference between the algorithms is small, for the slow and fast speeds, where the dynamics deviate from average, there is a clear advantage to the time-warping model.

Figure 3: Hand positions and predictions of the KFT and EKFT for sample reaches at A slow, B normal and C fast speeds. Note the different time scales between reaches.

The models were first trained using data from all speeds (Fig. 4A). The EKFT was 1.8% more accurate on average (p<0.01), and the effect was significant at the slow (1.9%, p<0.05) and the fast (2.8%, p<0.01), but not at the normal (p=0.3), speed. We also trained the models on data using only reaches at the self-selected normal speed, as we wanted to see whether there was enough variation to effectively train the EKFT (Fig. 4B). Interestingly, the performance of the EKFT was reduced by only 0.6%, and the KFT by 1.1%. The difference in performance between the EKFT and KFT was even more pronounced on average (2.3%, p<0.001), and for the slow and fast speeds (3.6 and 4.1%, both p<0.0001). At the normal speed, the algorithms again were not statistically different (p=0.6). This result demonstrates that the EKFT is a practical option for a real BMI system, as it is not necessary to greatly vary the speeds while collecting training data for the model to be effective over a wide range of intended speeds. Explicitly incorporating speed information into the trajectory model helps decoding by modeling the natural variation in volitional speed.

Figure 4: Mean and standard error of EKFT and KFT accuracy at the different subject-selected speeds. Models were trained on reaches at A all speeds and B just normal-speed reaches.
Asterisks indicate statistically significant differences between the algorithms.

5 Mixtures of Targets
So far, we have assumed that the targets of our reaches are perfectly known. In a real-world system, there will be uncertainty about the intended target of the reach. However, in typical applications there is a small number of possible objectives. Here we address this situation. Drawing on the recent literature, we use a mixture model to consider each of the possible targets [11, 13]. We condition the posterior probability for the state on the N possible targets T_i:

    p(x_t | y_{1:t}) = Σ_{i=1..N} p(x_t | y_{1:t}, T_i) p(T_i | y_{1:t}).    (6)

Using Bayes' rule, this equation becomes:

    p(x_t | y_{1:t}) = Σ_{i=1..N} p(x_t | y_{1:t}, T_i) p(y_{1:t} | T_i) p(T_i) / p(y_{1:t}).    (7)

As we are dealing with a mixture model, we perform the Kalman filter recursion for each possible target, and our solution is a weighted sum of the outputs. The weights are proportional to the prior for that target, p(T_i), and the likelihood of the model given that target, p(y_{1:t} | T_i). The denominator p(y_{1:t}) is independent of the target and does not need to be calculated. We tested mixtures of both algorithms, the mKFT and the mEKFT, with real uncertain priors obtained from eye tracking in the one-second period preceding movement. As the targets were situated on two planes, the three-dimensional location of the eye gaze was found by projecting its direction onto those planes. The first, middle and last eye samples were selected, and all other samples were assigned to a group according to which of the three was closest. The mean and variance of these three groups were used to initialize three Kalman filters in the mixture model. The priors of the three groups were assigned proportional to the number of samples in them. If the subject looks at multiple positions prior to reaching, this method ensures with high probability that the correct target is accounted for in one of the filters in the mixture. We also compared to the MTM approach of Yu et al. [13], where a different KF model is generated for each target and a mixture is performed over these models. This approach explicitly captures the dynamics of stereotypical reaches to specific targets. Given perfect target information, it would reduce to the STM described above. Priors for the MTM were found by assigning each valid eye sample to its closest two targets, and weighting the models proportionally to the number of samples assigned to the corresponding target, divided by its distance from the mean of those samples. We tried other ways of assigning priors, and the one presented gave the best results. We calculated the reduction in decoding quality when, instead of perfect priors, we provide eye-movement-based noisy priors (Fig. 5). The accuracies of the mEKFT, the mKFT and the MTM were degraded by only 0.8, 1.9 and 2.1% respectively, compared to the perfect-prior situation. The mEKFT was still close to 10% better than the KF. The mixture model framework is effective in accounting for uncertain priors.

Figure 5: Mean and standard errors of accuracy for algorithms with perfect priors, and uncertain priors with full and partial training set. The asterisk indicates a statistically significant effect between the two training types, where real priors are used.

Here, only reaches at normal speed were used to train the models, as this is a more realistic training set for a BMI application. This accounts for the degraded performance of the MTM with perfect priors relative to the STM from above (Fig. 2). With even more stereotyped training data for each target, the MTM doesn't generalize as well to new speeds.
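The combination in Eqs. (6)-(7) is straightforward to implement once each per-target filter reports its accumulated observation log-likelihood. A minimal sketch follows, with our own function name and interface; the per-filter estimates are assumed to come from KF/EKF recursions as above.

```python
import numpy as np

def mix_filters(x_hats, logliks, log_priors):
    """Posterior-weighted combination of N per-target filters (Eqs. 6-7).
    x_hats    : (N, d) state estimates, one per candidate target
    logliks   : (N,) accumulated observation log-likelihoods per model
    log_priors: (N,) log prior of each target (e.g., from fixation counts)"""
    log_w = log_priors + logliks
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                  # the p(y_{1:t}) denominator cancels here
    return w @ x_hats, w          # mixed estimate and posterior weights
```

Working in log space and normalizing removes the need to evaluate p(y_{1:t}) explicitly, as noted above.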
We also wanted to know whether the algorithms could generalize to new targets. In a real application, the available training data will generally not span the entire usable workspace. We compared the algorithms where reaches to all targets except the one being tested had been used to train the models. The performance of the MTM was significantly degraded, unsurprisingly, as it was designed for reaches to a set of known targets. Performance of the mKFT and mEKFT degraded by about 1%, but not significantly (both p>0.7), demonstrating that the continuous approach to target information is preferable when the target could be anywhere in space, not just at locations for which training data are available.

6 Discussion and conclusions
The goal of this work was to design a trajectory model that would improve decoding for BMIs, with an application to reaching. We incorporated two features that prominently influence the dynamics of natural reaches: the movement speed and the target location. Our approach is appropriate where uncertain target information is available. The model generalizes well to new regions of the workspace for which there is no training data, and across a broad range of reaching dynamics to widely spaced targets in three dimensions. The advantages over linear models in decoding precision we report here could be equally obtained using mixtures over many targets and speeds. While mixture models [11, 13] could allow for slow versus fast movements and any number of potential targets, this strategy will generally require many mixture components. Such an approach would require much more training data, as we have shown that it does not generalize well. It would also be run-time intensive, which is problematic for prosthetic devices that rely on low-power controllers. In contrast, the algorithm introduced here only takes a small amount of additional run time in comparison to the standard KF approach. The EKF is only marginally slower than the standard KF, and the algorithm will not generally need to consider more than three mixture components, assuming the subject fixates the target within the second preceding the reach. In this paper we assumed that subjects would always fixate a reach target, along with other non-targets. While this is close to the way humans usually coordinate eyes and reaches [15], there might be cases where people do not fixate a reach target. Our approach could easily be extended to deal with such situations by adding a dummy mixture component that allows the description of movements to any target. As an alternative to mixture approaches, a system can explicitly estimate the target position in the state vector [9]. This approach, however, would not straightforwardly allow for the rich target information available; we look at the target but also at other locations, strongly suggesting mixture distributions. A combination of the two approaches could further improve decoding quality. We could both estimate speed and target position for the EKFT in a continuous manner while retaining the mixture over target priors. We believe that the issues we have addressed here are almost universal. Virtually all types of movements are executed at varying speed. A probabilistic distribution over a small number of action candidates may also be expected in most BMI applications; after all, there are usually only a small number of actions that make sense in a given environment.
While this work is presented in the context of decoding human reaching, it may be applied to a wide range of BMI applications, including lower-limb prosthetic devices and human-computer interactions, as well as to different signal sources such as electrode grid recordings and electroencephalograms. The increased user control in conveying their intended movements would significantly improve the functionality of a neuroprosthetic device.

Acknowledgements
The authors thank T. Haswell, E. Krepkovich, and V. Ravichandran for assistance with experiments. This work was funded by the NSF Program in Cyber-Physical Systems.

References
[1] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue, "Neuronal ensemble control of prosthetic devices by a human with tetraplegia," Nature, vol. 442, 2006, pp. 164-171.
[2] W. Wu, Y. Gao, E. Bienenstock, J.P. Donoghue, and M.J. Black, "Bayesian population decoding of motor cortical activity using a Kalman filter," Neural Computation, vol. 18, 2006, pp. 80-118.
[3] W. Wu, M.J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J.P. Donoghue, "Neural decoding of cursor motion using a Kalman filter," Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, 2003, p. 133.
[4] R.E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, 1960, pp. 35-45.
[5] G.G. Scandaroli, G.A. Borges, J.Y. Ishihara, M.H. Terra, A.F.D. Rocha, and F.A.D.O. Nascimento, "Estimation of foot orientation with respect to ground for an above knee robotic prosthesis," Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA: IEEE Press, 2009, pp. 1112-1117.
[6] I. Cikajlo, Z. Matjačić, and T. Bajd, "Efficient FES triggering applying Kalman filter during sensory supported treadmill walking," Journal of Medical Engineering & Technology, vol. 32, 2008, pp. 133-144.
[7] S. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, "Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia," Journal of Neural Engineering, vol. 5, 2008, pp. 455-476.
[8] L. Srinivasan, U.T. Eden, A.S. Willsky, and E.N. Brown, "A state-space analysis for reconstruction of goal-directed movements using neural signals," Neural Computation, vol. 18, 2006, pp. 2465-2494.
[9] G.H. Mulliken, S. Musallam, and R.A. Andersen, "Decoding trajectories from posterior parietal cortex ensembles," Journal of Neuroscience, vol. 28, 2008, p. 12913.
[10] W. Wu, J.E. Kulkarni, N.G. Hatsopoulos, and L. Paninski, "Neural decoding of hand motion using a linear state-space model with hidden states," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, 2009, p. 1.
[11] J.E. Kulkarni and L. Paninski, "State-space decoding of goal-directed movements," IEEE Signal Processing Magazine, vol. 25, 2008, p. 78.
[12] C. Kemere and T. Meng, "Optimal estimation of feed-forward-controlled linear systems," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), 2005.
[13] B.M. Yu, C. Kemere, G. Santhanam, A. Afshar, S.I. Ryu, T.H. Meng, M. Sahani, and K.V. Shenoy, "Mixture of trajectory models for neural decoding of goal-directed movements," Journal of Neurophysiology, vol. 97, 2007, p. 3763.
[14] N. Hatsopoulos, J.
Joshi, and J.G. O'Leary, "Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles," Journal of Neurophysiology, vol. 92, 2004, p. 1165.
[15] R.S. Johansson, G. Westling, A. Backstrom, and J.R. Flanagan, "Eye-hand coordination in object manipulation," Journal of Neuroscience, vol. 21, 2001, p. 6917.
[16] G. Wu, F.C. van der Helm, H.E.J. Veeger, M. Makhsous, P. Van Roy, C. Anglin, J. Nagels, A.R. Karduna, and K. McQuade, "ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion, Part II: shoulder, elbow, wrist and hand," Journal of Biomechanics, vol. 38, 2005, pp. 981-992.
[17] D. Simon, Optimal State Estimation: Kalman, H-infinity and Nonlinear Approaches, John Wiley and Sons, 2006.
[18] Z. Ghahramani and G.E. Hinton, "Parameter estimation for linear dynamical systems," University of Toronto technical report CRG-TR-96-2, vol. 6, 1996.